A conditional latent autoregressive recurrent model for generation and forecasting of beam dynamics in particle accelerators (2403.13858v1)

Published 19 Mar 2024 in physics.acc-ph, cs.CV, and cs.LG

Abstract: Particle accelerators are complex systems that focus, guide, and accelerate intense charged particle beams to high energy. Beam diagnostics present a challenging problem due to limited non-destructive measurements, computationally demanding simulations, and inherent uncertainties in the system. We propose a two-step unsupervised deep learning framework, the Conditional Latent Autoregressive Recurrent Model (CLARM), for learning the spatiotemporal dynamics of charged particles in accelerators. CLARM consists of a Conditional Variational Autoencoder (CVAE), which transforms six-dimensional phase space into a lower-dimensional latent distribution, and a Long Short-Term Memory (LSTM) network, which captures temporal dynamics in an autoregressive manner. CLARM can generate projections at various accelerator modules by sampling and decoding the latent space representation. The model also forecasts future states (downstream locations) of charged particles from past states (upstream locations). The results demonstrate that the generative and forecasting abilities of the proposed approach are promising when tested against a variety of evaluation metrics.
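
The abstract describes a two-stage pipeline: a CVAE compresses phase-space projections into a low-dimensional latent space conditioned on the accelerator module, and an LSTM rolls that latent representation forward autoregressively to forecast downstream states. The sketch below is a minimal, hypothetical illustration of that pattern, not the authors' implementation: it assumes flattened 2D projections as inputs, a one-hot module label as the conditioning variable, an 8-dimensional latent space, and plain fully connected encoder/decoder networks in PyTorch.

# Minimal sketch of the CLARM-style two-step idea (CVAE + latent LSTM).
# All layer sizes, the one-hot conditioning, and the MLP encoder/decoder
# are illustrative assumptions, not details taken from the paper.
import torch
import torch.nn as nn


class ConditionalVAE(nn.Module):
    """Step 1: compress phase-space projections into a conditional latent space."""

    def __init__(self, input_dim: int, cond_dim: int, latent_dim: int):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(input_dim + cond_dim, 256), nn.ReLU(),
            nn.Linear(256, 2 * latent_dim),  # outputs mean and log-variance
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim + cond_dim, 256), nn.ReLU(),
            nn.Linear(256, input_dim),
        )

    def encode(self, x, c):
        mu, logvar = self.encoder(torch.cat([x, c], dim=-1)).chunk(2, dim=-1)
        return mu, logvar

    def reparameterize(self, mu, logvar):
        # Standard VAE reparameterization trick.
        return mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)

    def decode(self, z, c):
        return self.decoder(torch.cat([z, c], dim=-1))


class LatentLSTM(nn.Module):
    """Step 2: autoregressively forecast downstream latent states from upstream ones."""

    def __init__(self, latent_dim: int, hidden_dim: int = 128):
        super().__init__()
        self.lstm = nn.LSTM(latent_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, latent_dim)

    def forward(self, z_seq):
        h, _ = self.lstm(z_seq)   # (batch, steps, hidden)
        return self.head(h)       # predicted next latent state at each step

    @torch.no_grad()
    def roll_out(self, z_seq, n_future: int):
        """Feed predictions back in to forecast n_future downstream locations."""
        seq = z_seq
        for _ in range(n_future):
            z_next = self.forward(seq)[:, -1:, :]
            seq = torch.cat([seq, z_next], dim=1)
        return seq[:, z_seq.shape[1]:, :]


if __name__ == "__main__":
    # Toy dimensions: flattened 64x64 projections, 10 accelerator modules, 8-D latent.
    vae, lstm = ConditionalVAE(64 * 64, 10, 8), LatentLSTM(8)
    x = torch.rand(4, 64 * 64)                            # batch of projections
    c = torch.eye(10)[torch.randint(0, 10, (4,))]         # one-hot module labels
    mu, logvar = vae.encode(x, c)
    z = vae.reparameterize(mu, logvar)
    recon = vae.decode(z, c)                              # generation by decoding
    z_future = lstm.roll_out(z.unsqueeze(1), n_future=3)  # forecast 3 downstream states
    print(recon.shape, z_future.shape)

In this setup, generation corresponds to sampling a latent vector and decoding it with a chosen module label, while forecasting corresponds to encoding upstream projections and letting roll_out predict the latent states at downstream locations before decoding them.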

[2022] Rautela, M., Senthilnath, J., Monaco, E., Gopalakrishnan, S.: Delamination prediction in composite panels using unsupervised-feature learning methods with wavelet-enhanced guided wave representations. Composite Structures 291, 115579 (2022) Scheinker et al. [2023] Scheinker, A., Cropp, F., Filippetto, D.: Adaptive autoencoder latent space tuning for more robust machine learning beyond the training set for six-dimensional phase space diagnostics of a time-varying ultrafast electron-diffraction compact accelerator. Physical Review E 107(4), 045302 (2023) Kingma and Welling [2013] Kingma, D.P., Welling, M.: Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114 (2013) Hartmann et al. [2022] Hartmann, G., Goetzke, G., Düsterer, S., Feuer-Forson, P., Lever, F., Meier, D., Möller, F., Vera Ramirez, L., Guehr, M., Tiedtke, K., et al.: Unsupervised real-world knowledge extraction via disentangled variational autoencoders for photon diagnostics. Scientific Reports 12(1), 20783 (2022) Scheinker et al. [2023] Scheinker, A., Filippetto, D., Cropp, F.: 6d phase space diagnostics based on adaptively tuned physics-informed generative convolutional neural networks. In: Journal of Physics: Conference Series, vol. 2420, p. 012068 (2023). IOP Publishing Cathey et al. [2018] Cathey, B., Cousineau, S., Aleksandrov, A., Zhukov, A.: First six dimensional phase space measurement of an accelerator beam. Physical Review Letters 121(6), 064804 (2018) Tenenbaum [2005] Tenenbaum, P.: Lucretia: A matlab-based toolbox for the modelling and simulation of single-pass electron beam transport systems. In: Proceedings of the 2005 Particle Accelerator Conference, pp. 4197–4199 (2005). IEEE Young and Billen [2003] Young, L., Billen, J.: The particle tracking code parmela. In: Proceedings of the Particle Accelerator Conference, vol. 5, pp. 3521–3523 (2003) Pang and Rybarcyk [2014] Pang, X., Rybarcyk, L.: Gpu accelerated online multi-particle beam dynamics simulator for ion linear particle accelerators. Computer Physics Communications 185(3), 744–753 (2014) Adelmann [2019] Adelmann, A.: On nonintrusive uncertainty quantification and surrogate model construction in particle accelerator modeling. SIAM/ASA Journal on Uncertainty Quantification 7(2), 383–416 (2019) Newton [1970] Newton, R.: Inverse problems in physics. SIAM Review 12(3), 346–356 (1970) Wolski et al. [2022] Wolski, A., Johnson, M.A., King, M., Militsyn, B.L., Williams, P.H.: Transverse phase space tomography in an accelerator test facility using image compression and machine learning. Physical Review Accelerators and Beams 25(12), 122803 (2022) Mayet et al. [2022] Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022) Zhu et al. [2] Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. 
[2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. 
[2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. 
[2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Boehnlein, A., Diefenthaler, M., Sato, N., Schram, M., Ziegler, V., Fanelli, C., Hjorth-Jensen, M., Horn, T., Kuchera, M.P., Lee, D., et al.: Colloquium: Machine learning in nuclear physics. Reviews of Modern Physics 94(3), 031003 (2022) Gonoskov et al. [2019] Gonoskov, A., Wallin, E., Polovinkin, A., Meyerov, I.: Employing machine learning for theory validation and identification of experimental conditions in laser-plasma physics. Scientific reports 9(1), 7043 (2019) AlQuraishi and Sorger [2021] AlQuraishi, M., Sorger, P.K.: Differentiable biology: using deep learning for biophysics-based and data-driven modeling of molecular mechanisms. Nature methods 18(10), 1169–1180 (2021) Reichstein et al. [2019] Reichstein, M., Camps-Valls, G., Stevens, B., Jung, M., Denzler, J., Carvalhais, N., Prabhat, f.: Deep learning and process understanding for data-driven earth system science. Nature 566(7743), 195–204 (2019) Scheinker and Pokharel [2023] Scheinker, A., Pokharel, R.: Physics-constrained 3D convolutional neural networks for electrodynamics. APL Machine Learning 1(2) (2023) Wandel et al. [2021] Wandel, N., Weinmann, M., Klein, R.: Teaching the incompressible navier–stokes equations to fast neural surrogate models in three dimensions. Physics of Fluids 33(4) (2021) Shi et al. [2015] Shi, X., Chen, Z., Wang, H., Yeung, D.-Y., Wong, W.-K., Woo, W.-c.: Convolutional lstm network: A machine learning approach for precipitation nowcasting. Advances in neural information processing systems 28 (2015) Cheng et al. [2020] Cheng, M., Fang, F., Pain, C.C., Navon, I.: Data-driven modelling of nonlinear spatio-temporal fluid flows using a deep convolutional generative adversarial network. Computer Methods in Applied Mechanics and Engineering 365, 113000 (2020) Kipf and Welling [2016] Kipf, T.N., Welling, M.: Semi-supervised classification with graph convolutional networks. arXiv preprint arXiv:1609.02907 (2016) Chen et al. [2021] Chen, J., Hachem, E., Viquerat, J.: Graph neural networks for laminar flow prediction around random two-dimensional shapes. Physics of Fluids 33(12) (2021) Scheinker [2021] Scheinker, A.: Adaptive machine learning for time-varying systems: low dimensional latent space tuning. Journal of Instrumentation 16(10), 10008 (2021) Montes de Oca Zapiain et al. [2021] Oca Zapiain, D., Stewart, J.A., Dingreville, R.: Accelerating phase-field-based microstructure evolution predictions via surrogate models trained by machine learning methods. npj Computational Materials 7(1), 3 (2021) Wiewel et al. [2019] Wiewel, S., Becher, M., Thuerey, N.: Latent space physics: Towards learning the temporal evolution of fluid flow. In: Computer Graphics Forum, vol. 38, pp. 71–82 (2019). Wiley Online Library Nakamura et al. [2021] Nakamura, T., Fukami, K., Hasegawa, K., Nabae, Y., Fukagata, K.: Convolutional neural network and long short-term memory based reduced order surrogate for minimal turbulent channel flow. Physics of Fluids 33(2) (2021) Maulik et al. [2021] Maulik, R., Lusch, B., Balaprakash, P.: Reduced-order modeling of advection-dominated systems with recurrent neural networks and convolutional autoencoders. Physics of Fluids 33(3) (2021) Vlachas et al. [2022] Vlachas, P.R., Arampatzis, G., Uhler, C., Koumoutsakos, P.: Multiscale simulations of complex systems by learning their effective dynamics. 
Nature Machine Intelligence 4(4), 359–366 (2022) Rautela et al. [2022] Rautela, M., Senthilnath, J., Monaco, E., Gopalakrishnan, S.: Delamination prediction in composite panels using unsupervised-feature learning methods with wavelet-enhanced guided wave representations. Composite Structures 291, 115579 (2022) Scheinker et al. [2023] Scheinker, A., Cropp, F., Filippetto, D.: Adaptive autoencoder latent space tuning for more robust machine learning beyond the training set for six-dimensional phase space diagnostics of a time-varying ultrafast electron-diffraction compact accelerator. Physical Review E 107(4), 045302 (2023) Kingma and Welling [2013] Kingma, D.P., Welling, M.: Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114 (2013) Hartmann et al. [2022] Hartmann, G., Goetzke, G., Düsterer, S., Feuer-Forson, P., Lever, F., Meier, D., Möller, F., Vera Ramirez, L., Guehr, M., Tiedtke, K., et al.: Unsupervised real-world knowledge extraction via disentangled variational autoencoders for photon diagnostics. Scientific Reports 12(1), 20783 (2022) Scheinker et al. [2023] Scheinker, A., Filippetto, D., Cropp, F.: 6d phase space diagnostics based on adaptively tuned physics-informed generative convolutional neural networks. In: Journal of Physics: Conference Series, vol. 2420, p. 012068 (2023). IOP Publishing Cathey et al. [2018] Cathey, B., Cousineau, S., Aleksandrov, A., Zhukov, A.: First six dimensional phase space measurement of an accelerator beam. Physical Review Letters 121(6), 064804 (2018) Tenenbaum [2005] Tenenbaum, P.: Lucretia: A matlab-based toolbox for the modelling and simulation of single-pass electron beam transport systems. In: Proceedings of the 2005 Particle Accelerator Conference, pp. 4197–4199 (2005). IEEE Young and Billen [2003] Young, L., Billen, J.: The particle tracking code parmela. In: Proceedings of the Particle Accelerator Conference, vol. 5, pp. 3521–3523 (2003) Pang and Rybarcyk [2014] Pang, X., Rybarcyk, L.: Gpu accelerated online multi-particle beam dynamics simulator for ion linear particle accelerators. Computer Physics Communications 185(3), 744–753 (2014) Adelmann [2019] Adelmann, A.: On nonintrusive uncertainty quantification and surrogate model construction in particle accelerator modeling. SIAM/ASA Journal on Uncertainty Quantification 7(2), 383–416 (2019) Newton [1970] Newton, R.: Inverse problems in physics. SIAM Review 12(3), 346–356 (1970) Wolski et al. [2022] Wolski, A., Johnson, M.A., King, M., Militsyn, B.L., Williams, P.H.: Transverse phase space tomography in an accelerator test facility using image compression and machine learning. Physical Review Accelerators and Beams 25(12), 122803 (2022) Mayet et al. [2022] Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022) Zhu et al. [2] Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. 
[2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. 
[2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. 
[2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Gonoskov, A., Wallin, E., Polovinkin, A., Meyerov, I.: Employing machine learning for theory validation and identification of experimental conditions in laser-plasma physics. Scientific reports 9(1), 7043 (2019) AlQuraishi and Sorger [2021] AlQuraishi, M., Sorger, P.K.: Differentiable biology: using deep learning for biophysics-based and data-driven modeling of molecular mechanisms. Nature methods 18(10), 1169–1180 (2021) Reichstein et al. [2019] Reichstein, M., Camps-Valls, G., Stevens, B., Jung, M., Denzler, J., Carvalhais, N., Prabhat, f.: Deep learning and process understanding for data-driven earth system science. Nature 566(7743), 195–204 (2019) Scheinker and Pokharel [2023] Scheinker, A., Pokharel, R.: Physics-constrained 3D convolutional neural networks for electrodynamics. APL Machine Learning 1(2) (2023) Wandel et al. [2021] Wandel, N., Weinmann, M., Klein, R.: Teaching the incompressible navier–stokes equations to fast neural surrogate models in three dimensions. Physics of Fluids 33(4) (2021) Shi et al. [2015] Shi, X., Chen, Z., Wang, H., Yeung, D.-Y., Wong, W.-K., Woo, W.-c.: Convolutional lstm network: A machine learning approach for precipitation nowcasting. Advances in neural information processing systems 28 (2015) Cheng et al. [2020] Cheng, M., Fang, F., Pain, C.C., Navon, I.: Data-driven modelling of nonlinear spatio-temporal fluid flows using a deep convolutional generative adversarial network. Computer Methods in Applied Mechanics and Engineering 365, 113000 (2020) Kipf and Welling [2016] Kipf, T.N., Welling, M.: Semi-supervised classification with graph convolutional networks. arXiv preprint arXiv:1609.02907 (2016) Chen et al. [2021] Chen, J., Hachem, E., Viquerat, J.: Graph neural networks for laminar flow prediction around random two-dimensional shapes. Physics of Fluids 33(12) (2021) Scheinker [2021] Scheinker, A.: Adaptive machine learning for time-varying systems: low dimensional latent space tuning. Journal of Instrumentation 16(10), 10008 (2021) Montes de Oca Zapiain et al. [2021] Oca Zapiain, D., Stewart, J.A., Dingreville, R.: Accelerating phase-field-based microstructure evolution predictions via surrogate models trained by machine learning methods. npj Computational Materials 7(1), 3 (2021) Wiewel et al. [2019] Wiewel, S., Becher, M., Thuerey, N.: Latent space physics: Towards learning the temporal evolution of fluid flow. In: Computer Graphics Forum, vol. 38, pp. 71–82 (2019). Wiley Online Library Nakamura et al. [2021] Nakamura, T., Fukami, K., Hasegawa, K., Nabae, Y., Fukagata, K.: Convolutional neural network and long short-term memory based reduced order surrogate for minimal turbulent channel flow. Physics of Fluids 33(2) (2021) Maulik et al. [2021] Maulik, R., Lusch, B., Balaprakash, P.: Reduced-order modeling of advection-dominated systems with recurrent neural networks and convolutional autoencoders. Physics of Fluids 33(3) (2021) Vlachas et al. [2022] Vlachas, P.R., Arampatzis, G., Uhler, C., Koumoutsakos, P.: Multiscale simulations of complex systems by learning their effective dynamics. Nature Machine Intelligence 4(4), 359–366 (2022) Rautela et al. [2022] Rautela, M., Senthilnath, J., Monaco, E., Gopalakrishnan, S.: Delamination prediction in composite panels using unsupervised-feature learning methods with wavelet-enhanced guided wave representations. 
Composite Structures 291, 115579 (2022) Scheinker et al. [2023] Scheinker, A., Cropp, F., Filippetto, D.: Adaptive autoencoder latent space tuning for more robust machine learning beyond the training set for six-dimensional phase space diagnostics of a time-varying ultrafast electron-diffraction compact accelerator. Physical Review E 107(4), 045302 (2023) Kingma and Welling [2013] Kingma, D.P., Welling, M.: Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114 (2013) Hartmann et al. [2022] Hartmann, G., Goetzke, G., Düsterer, S., Feuer-Forson, P., Lever, F., Meier, D., Möller, F., Vera Ramirez, L., Guehr, M., Tiedtke, K., et al.: Unsupervised real-world knowledge extraction via disentangled variational autoencoders for photon diagnostics. Scientific Reports 12(1), 20783 (2022) Scheinker et al. [2023] Scheinker, A., Filippetto, D., Cropp, F.: 6d phase space diagnostics based on adaptively tuned physics-informed generative convolutional neural networks. In: Journal of Physics: Conference Series, vol. 2420, p. 012068 (2023). IOP Publishing Cathey et al. [2018] Cathey, B., Cousineau, S., Aleksandrov, A., Zhukov, A.: First six dimensional phase space measurement of an accelerator beam. Physical Review Letters 121(6), 064804 (2018) Tenenbaum [2005] Tenenbaum, P.: Lucretia: A matlab-based toolbox for the modelling and simulation of single-pass electron beam transport systems. In: Proceedings of the 2005 Particle Accelerator Conference, pp. 4197–4199 (2005). IEEE Young and Billen [2003] Young, L., Billen, J.: The particle tracking code parmela. In: Proceedings of the Particle Accelerator Conference, vol. 5, pp. 3521–3523 (2003) Pang and Rybarcyk [2014] Pang, X., Rybarcyk, L.: Gpu accelerated online multi-particle beam dynamics simulator for ion linear particle accelerators. Computer Physics Communications 185(3), 744–753 (2014) Adelmann [2019] Adelmann, A.: On nonintrusive uncertainty quantification and surrogate model construction in particle accelerator modeling. SIAM/ASA Journal on Uncertainty Quantification 7(2), 383–416 (2019) Newton [1970] Newton, R.: Inverse problems in physics. SIAM Review 12(3), 346–356 (1970) Wolski et al. [2022] Wolski, A., Johnson, M.A., King, M., Militsyn, B.L., Williams, P.H.: Transverse phase space tomography in an accelerator test facility using image compression and machine learning. Physical Review Accelerators and Beams 25(12), 122803 (2022) Mayet et al. [2022] Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022) Zhu et al. [2] Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. [2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. 
[2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. 
Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) AlQuraishi, M., Sorger, P.K.: Differentiable biology: using deep learning for biophysics-based and data-driven modeling of molecular mechanisms. Nature methods 18(10), 1169–1180 (2021) Reichstein et al. 
[2019] Reichstein, M., Camps-Valls, G., Stevens, B., Jung, M., Denzler, J., Carvalhais, N., Prabhat, f.: Deep learning and process understanding for data-driven earth system science. Nature 566(7743), 195–204 (2019) Scheinker and Pokharel [2023] Scheinker, A., Pokharel, R.: Physics-constrained 3D convolutional neural networks for electrodynamics. APL Machine Learning 1(2) (2023) Wandel et al. [2021] Wandel, N., Weinmann, M., Klein, R.: Teaching the incompressible navier–stokes equations to fast neural surrogate models in three dimensions. Physics of Fluids 33(4) (2021) Shi et al. [2015] Shi, X., Chen, Z., Wang, H., Yeung, D.-Y., Wong, W.-K., Woo, W.-c.: Convolutional lstm network: A machine learning approach for precipitation nowcasting. Advances in neural information processing systems 28 (2015) Cheng et al. [2020] Cheng, M., Fang, F., Pain, C.C., Navon, I.: Data-driven modelling of nonlinear spatio-temporal fluid flows using a deep convolutional generative adversarial network. Computer Methods in Applied Mechanics and Engineering 365, 113000 (2020) Kipf and Welling [2016] Kipf, T.N., Welling, M.: Semi-supervised classification with graph convolutional networks. arXiv preprint arXiv:1609.02907 (2016) Chen et al. [2021] Chen, J., Hachem, E., Viquerat, J.: Graph neural networks for laminar flow prediction around random two-dimensional shapes. Physics of Fluids 33(12) (2021) Scheinker [2021] Scheinker, A.: Adaptive machine learning for time-varying systems: low dimensional latent space tuning. Journal of Instrumentation 16(10), 10008 (2021) Montes de Oca Zapiain et al. [2021] Oca Zapiain, D., Stewart, J.A., Dingreville, R.: Accelerating phase-field-based microstructure evolution predictions via surrogate models trained by machine learning methods. npj Computational Materials 7(1), 3 (2021) Wiewel et al. [2019] Wiewel, S., Becher, M., Thuerey, N.: Latent space physics: Towards learning the temporal evolution of fluid flow. In: Computer Graphics Forum, vol. 38, pp. 71–82 (2019). Wiley Online Library Nakamura et al. [2021] Nakamura, T., Fukami, K., Hasegawa, K., Nabae, Y., Fukagata, K.: Convolutional neural network and long short-term memory based reduced order surrogate for minimal turbulent channel flow. Physics of Fluids 33(2) (2021) Maulik et al. [2021] Maulik, R., Lusch, B., Balaprakash, P.: Reduced-order modeling of advection-dominated systems with recurrent neural networks and convolutional autoencoders. Physics of Fluids 33(3) (2021) Vlachas et al. [2022] Vlachas, P.R., Arampatzis, G., Uhler, C., Koumoutsakos, P.: Multiscale simulations of complex systems by learning their effective dynamics. Nature Machine Intelligence 4(4), 359–366 (2022) Rautela et al. [2022] Rautela, M., Senthilnath, J., Monaco, E., Gopalakrishnan, S.: Delamination prediction in composite panels using unsupervised-feature learning methods with wavelet-enhanced guided wave representations. Composite Structures 291, 115579 (2022) Scheinker et al. [2023] Scheinker, A., Cropp, F., Filippetto, D.: Adaptive autoencoder latent space tuning for more robust machine learning beyond the training set for six-dimensional phase space diagnostics of a time-varying ultrafast electron-diffraction compact accelerator. Physical Review E 107(4), 045302 (2023) Kingma and Welling [2013] Kingma, D.P., Welling, M.: Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114 (2013) Hartmann et al. 
[2022] Hartmann, G., Goetzke, G., Düsterer, S., Feuer-Forson, P., Lever, F., Meier, D., Möller, F., Vera Ramirez, L., Guehr, M., Tiedtke, K., et al.: Unsupervised real-world knowledge extraction via disentangled variational autoencoders for photon diagnostics. Scientific Reports 12(1), 20783 (2022) Scheinker et al. [2023] Scheinker, A., Filippetto, D., Cropp, F.: 6d phase space diagnostics based on adaptively tuned physics-informed generative convolutional neural networks. In: Journal of Physics: Conference Series, vol. 2420, p. 012068 (2023). IOP Publishing Cathey et al. [2018] Cathey, B., Cousineau, S., Aleksandrov, A., Zhukov, A.: First six dimensional phase space measurement of an accelerator beam. Physical Review Letters 121(6), 064804 (2018) Tenenbaum [2005] Tenenbaum, P.: Lucretia: A matlab-based toolbox for the modelling and simulation of single-pass electron beam transport systems. In: Proceedings of the 2005 Particle Accelerator Conference, pp. 4197–4199 (2005). IEEE Young and Billen [2003] Young, L., Billen, J.: The particle tracking code parmela. In: Proceedings of the Particle Accelerator Conference, vol. 5, pp. 3521–3523 (2003) Pang and Rybarcyk [2014] Pang, X., Rybarcyk, L.: Gpu accelerated online multi-particle beam dynamics simulator for ion linear particle accelerators. Computer Physics Communications 185(3), 744–753 (2014) Adelmann [2019] Adelmann, A.: On nonintrusive uncertainty quantification and surrogate model construction in particle accelerator modeling. SIAM/ASA Journal on Uncertainty Quantification 7(2), 383–416 (2019) Newton [1970] Newton, R.: Inverse problems in physics. SIAM Review 12(3), 346–356 (1970) Wolski et al. [2022] Wolski, A., Johnson, M.A., King, M., Militsyn, B.L., Williams, P.H.: Transverse phase space tomography in an accelerator test facility using image compression and machine learning. Physical Review Accelerators and Beams 25(12), 122803 (2022) Mayet et al. [2022] Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022) Zhu et al. [2] Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. [2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. 
[2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. 
[2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Reichstein, M., Camps-Valls, G., Stevens, B., Jung, M., Denzler, J., Carvalhais, N., Prabhat, f.: Deep learning and process understanding for data-driven earth system science. Nature 566(7743), 195–204 (2019) Scheinker and Pokharel [2023] Scheinker, A., Pokharel, R.: Physics-constrained 3D convolutional neural networks for electrodynamics. APL Machine Learning 1(2) (2023) Wandel et al. [2021] Wandel, N., Weinmann, M., Klein, R.: Teaching the incompressible navier–stokes equations to fast neural surrogate models in three dimensions. Physics of Fluids 33(4) (2021) Shi et al. 
[2015] Shi, X., Chen, Z., Wang, H., Yeung, D.-Y., Wong, W.-K., Woo, W.-c.: Convolutional lstm network: A machine learning approach for precipitation nowcasting. Advances in neural information processing systems 28 (2015) Cheng et al. [2020] Cheng, M., Fang, F., Pain, C.C., Navon, I.: Data-driven modelling of nonlinear spatio-temporal fluid flows using a deep convolutional generative adversarial network. Computer Methods in Applied Mechanics and Engineering 365, 113000 (2020) Kipf and Welling [2016] Kipf, T.N., Welling, M.: Semi-supervised classification with graph convolutional networks. arXiv preprint arXiv:1609.02907 (2016) Chen et al. [2021] Chen, J., Hachem, E., Viquerat, J.: Graph neural networks for laminar flow prediction around random two-dimensional shapes. Physics of Fluids 33(12) (2021) Scheinker [2021] Scheinker, A.: Adaptive machine learning for time-varying systems: low dimensional latent space tuning. Journal of Instrumentation 16(10), 10008 (2021) Montes de Oca Zapiain et al. [2021] Oca Zapiain, D., Stewart, J.A., Dingreville, R.: Accelerating phase-field-based microstructure evolution predictions via surrogate models trained by machine learning methods. npj Computational Materials 7(1), 3 (2021) Wiewel et al. [2019] Wiewel, S., Becher, M., Thuerey, N.: Latent space physics: Towards learning the temporal evolution of fluid flow. In: Computer Graphics Forum, vol. 38, pp. 71–82 (2019). Wiley Online Library Nakamura et al. [2021] Nakamura, T., Fukami, K., Hasegawa, K., Nabae, Y., Fukagata, K.: Convolutional neural network and long short-term memory based reduced order surrogate for minimal turbulent channel flow. Physics of Fluids 33(2) (2021) Maulik et al. [2021] Maulik, R., Lusch, B., Balaprakash, P.: Reduced-order modeling of advection-dominated systems with recurrent neural networks and convolutional autoencoders. Physics of Fluids 33(3) (2021) Vlachas et al. [2022] Vlachas, P.R., Arampatzis, G., Uhler, C., Koumoutsakos, P.: Multiscale simulations of complex systems by learning their effective dynamics. Nature Machine Intelligence 4(4), 359–366 (2022) Rautela et al. [2022] Rautela, M., Senthilnath, J., Monaco, E., Gopalakrishnan, S.: Delamination prediction in composite panels using unsupervised-feature learning methods with wavelet-enhanced guided wave representations. Composite Structures 291, 115579 (2022) Scheinker et al. [2023] Scheinker, A., Cropp, F., Filippetto, D.: Adaptive autoencoder latent space tuning for more robust machine learning beyond the training set for six-dimensional phase space diagnostics of a time-varying ultrafast electron-diffraction compact accelerator. Physical Review E 107(4), 045302 (2023) Kingma and Welling [2013] Kingma, D.P., Welling, M.: Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114 (2013) Hartmann et al. [2022] Hartmann, G., Goetzke, G., Düsterer, S., Feuer-Forson, P., Lever, F., Meier, D., Möller, F., Vera Ramirez, L., Guehr, M., Tiedtke, K., et al.: Unsupervised real-world knowledge extraction via disentangled variational autoencoders for photon diagnostics. Scientific Reports 12(1), 20783 (2022) Scheinker et al. [2023] Scheinker, A., Filippetto, D., Cropp, F.: 6d phase space diagnostics based on adaptively tuned physics-informed generative convolutional neural networks. In: Journal of Physics: Conference Series, vol. 2420, p. 012068 (2023). IOP Publishing Cathey et al. [2018] Cathey, B., Cousineau, S., Aleksandrov, A., Zhukov, A.: First six dimensional phase space measurement of an accelerator beam. 
Physical Review Letters 121(6), 064804 (2018) Tenenbaum [2005] Tenenbaum, P.: Lucretia: A matlab-based toolbox for the modelling and simulation of single-pass electron beam transport systems. In: Proceedings of the 2005 Particle Accelerator Conference, pp. 4197–4199 (2005). IEEE Young and Billen [2003] Young, L., Billen, J.: The particle tracking code parmela. In: Proceedings of the Particle Accelerator Conference, vol. 5, pp. 3521–3523 (2003) Pang and Rybarcyk [2014] Pang, X., Rybarcyk, L.: Gpu accelerated online multi-particle beam dynamics simulator for ion linear particle accelerators. Computer Physics Communications 185(3), 744–753 (2014) Adelmann [2019] Adelmann, A.: On nonintrusive uncertainty quantification and surrogate model construction in particle accelerator modeling. SIAM/ASA Journal on Uncertainty Quantification 7(2), 383–416 (2019) Newton [1970] Newton, R.: Inverse problems in physics. SIAM Review 12(3), 346–356 (1970) Wolski et al. [2022] Wolski, A., Johnson, M.A., King, M., Militsyn, B.L., Williams, P.H.: Transverse phase space tomography in an accelerator test facility using image compression and machine learning. Physical Review Accelerators and Beams 25(12), 122803 (2022) Mayet et al. [2022] Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022) Zhu et al. [2] Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. [2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. 
Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. 
IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Scheinker, A., Pokharel, R.: Physics-constrained 3D convolutional neural networks for electrodynamics. APL Machine Learning 1(2) (2023) Wandel et al. [2021] Wandel, N., Weinmann, M., Klein, R.: Teaching the incompressible navier–stokes equations to fast neural surrogate models in three dimensions. Physics of Fluids 33(4) (2021) Shi et al. [2015] Shi, X., Chen, Z., Wang, H., Yeung, D.-Y., Wong, W.-K., Woo, W.-c.: Convolutional lstm network: A machine learning approach for precipitation nowcasting. Advances in neural information processing systems 28 (2015) Cheng et al. [2020] Cheng, M., Fang, F., Pain, C.C., Navon, I.: Data-driven modelling of nonlinear spatio-temporal fluid flows using a deep convolutional generative adversarial network. Computer Methods in Applied Mechanics and Engineering 365, 113000 (2020) Kipf and Welling [2016] Kipf, T.N., Welling, M.: Semi-supervised classification with graph convolutional networks. arXiv preprint arXiv:1609.02907 (2016) Chen et al. [2021] Chen, J., Hachem, E., Viquerat, J.: Graph neural networks for laminar flow prediction around random two-dimensional shapes. Physics of Fluids 33(12) (2021) Scheinker [2021] Scheinker, A.: Adaptive machine learning for time-varying systems: low dimensional latent space tuning. Journal of Instrumentation 16(10), 10008 (2021) Montes de Oca Zapiain et al. 
[2021] Oca Zapiain, D., Stewart, J.A., Dingreville, R.: Accelerating phase-field-based microstructure evolution predictions via surrogate models trained by machine learning methods. npj Computational Materials 7(1), 3 (2021) Wiewel et al. [2019] Wiewel, S., Becher, M., Thuerey, N.: Latent space physics: Towards learning the temporal evolution of fluid flow. In: Computer Graphics Forum, vol. 38, pp. 71–82 (2019). Wiley Online Library Nakamura et al. [2021] Nakamura, T., Fukami, K., Hasegawa, K., Nabae, Y., Fukagata, K.: Convolutional neural network and long short-term memory based reduced order surrogate for minimal turbulent channel flow. Physics of Fluids 33(2) (2021) Maulik et al. [2021] Maulik, R., Lusch, B., Balaprakash, P.: Reduced-order modeling of advection-dominated systems with recurrent neural networks and convolutional autoencoders. Physics of Fluids 33(3) (2021) Vlachas et al. [2022] Vlachas, P.R., Arampatzis, G., Uhler, C., Koumoutsakos, P.: Multiscale simulations of complex systems by learning their effective dynamics. Nature Machine Intelligence 4(4), 359–366 (2022) Rautela et al. [2022] Rautela, M., Senthilnath, J., Monaco, E., Gopalakrishnan, S.: Delamination prediction in composite panels using unsupervised-feature learning methods with wavelet-enhanced guided wave representations. Composite Structures 291, 115579 (2022) Scheinker et al. [2023] Scheinker, A., Cropp, F., Filippetto, D.: Adaptive autoencoder latent space tuning for more robust machine learning beyond the training set for six-dimensional phase space diagnostics of a time-varying ultrafast electron-diffraction compact accelerator. Physical Review E 107(4), 045302 (2023) Kingma and Welling [2013] Kingma, D.P., Welling, M.: Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114 (2013) Hartmann et al. [2022] Hartmann, G., Goetzke, G., Düsterer, S., Feuer-Forson, P., Lever, F., Meier, D., Möller, F., Vera Ramirez, L., Guehr, M., Tiedtke, K., et al.: Unsupervised real-world knowledge extraction via disentangled variational autoencoders for photon diagnostics. Scientific Reports 12(1), 20783 (2022) Scheinker et al. [2023] Scheinker, A., Filippetto, D., Cropp, F.: 6d phase space diagnostics based on adaptively tuned physics-informed generative convolutional neural networks. In: Journal of Physics: Conference Series, vol. 2420, p. 012068 (2023). IOP Publishing Cathey et al. [2018] Cathey, B., Cousineau, S., Aleksandrov, A., Zhukov, A.: First six dimensional phase space measurement of an accelerator beam. Physical Review Letters 121(6), 064804 (2018) Tenenbaum [2005] Tenenbaum, P.: Lucretia: A matlab-based toolbox for the modelling and simulation of single-pass electron beam transport systems. In: Proceedings of the 2005 Particle Accelerator Conference, pp. 4197–4199 (2005). IEEE Young and Billen [2003] Young, L., Billen, J.: The particle tracking code parmela. In: Proceedings of the Particle Accelerator Conference, vol. 5, pp. 3521–3523 (2003) Pang and Rybarcyk [2014] Pang, X., Rybarcyk, L.: Gpu accelerated online multi-particle beam dynamics simulator for ion linear particle accelerators. Computer Physics Communications 185(3), 744–753 (2014) Adelmann [2019] Adelmann, A.: On nonintrusive uncertainty quantification and surrogate model construction in particle accelerator modeling. SIAM/ASA Journal on Uncertainty Quantification 7(2), 383–416 (2019) Newton [1970] Newton, R.: Inverse problems in physics. SIAM Review 12(3), 346–356 (1970) Wolski et al. 
[2022] Wolski, A., Johnson, M.A., King, M., Militsyn, B.L., Williams, P.H.: Transverse phase space tomography in an accelerator test facility using image compression and machine learning. Physical Review Accelerators and Beams 25(12), 122803 (2022) Mayet et al. [2022] Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022) Zhu et al. [2] Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. [2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. 
In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. 
[2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Wandel, N., Weinmann, M., Klein, R.: Teaching the incompressible navier–stokes equations to fast neural surrogate models in three dimensions. Physics of Fluids 33(4) (2021) Shi et al. [2015] Shi, X., Chen, Z., Wang, H., Yeung, D.-Y., Wong, W.-K., Woo, W.-c.: Convolutional lstm network: A machine learning approach for precipitation nowcasting. Advances in neural information processing systems 28 (2015) Cheng et al. [2020] Cheng, M., Fang, F., Pain, C.C., Navon, I.: Data-driven modelling of nonlinear spatio-temporal fluid flows using a deep convolutional generative adversarial network. Computer Methods in Applied Mechanics and Engineering 365, 113000 (2020) Kipf and Welling [2016] Kipf, T.N., Welling, M.: Semi-supervised classification with graph convolutional networks. arXiv preprint arXiv:1609.02907 (2016) Chen et al. [2021] Chen, J., Hachem, E., Viquerat, J.: Graph neural networks for laminar flow prediction around random two-dimensional shapes. Physics of Fluids 33(12) (2021) Scheinker [2021] Scheinker, A.: Adaptive machine learning for time-varying systems: low dimensional latent space tuning. Journal of Instrumentation 16(10), 10008 (2021) Montes de Oca Zapiain et al. [2021] Oca Zapiain, D., Stewart, J.A., Dingreville, R.: Accelerating phase-field-based microstructure evolution predictions via surrogate models trained by machine learning methods. npj Computational Materials 7(1), 3 (2021) Wiewel et al. [2019] Wiewel, S., Becher, M., Thuerey, N.: Latent space physics: Towards learning the temporal evolution of fluid flow. In: Computer Graphics Forum, vol. 38, pp. 71–82 (2019). Wiley Online Library Nakamura et al. [2021] Nakamura, T., Fukami, K., Hasegawa, K., Nabae, Y., Fukagata, K.: Convolutional neural network and long short-term memory based reduced order surrogate for minimal turbulent channel flow. Physics of Fluids 33(2) (2021) Maulik et al. [2021] Maulik, R., Lusch, B., Balaprakash, P.: Reduced-order modeling of advection-dominated systems with recurrent neural networks and convolutional autoencoders. Physics of Fluids 33(3) (2021) Vlachas et al. [2022] Vlachas, P.R., Arampatzis, G., Uhler, C., Koumoutsakos, P.: Multiscale simulations of complex systems by learning their effective dynamics. Nature Machine Intelligence 4(4), 359–366 (2022) Rautela et al. 
[2022] Rautela, M., Senthilnath, J., Monaco, E., Gopalakrishnan, S.: Delamination prediction in composite panels using unsupervised-feature learning methods with wavelet-enhanced guided wave representations. Composite Structures 291, 115579 (2022) Scheinker et al. [2023] Scheinker, A., Cropp, F., Filippetto, D.: Adaptive autoencoder latent space tuning for more robust machine learning beyond the training set for six-dimensional phase space diagnostics of a time-varying ultrafast electron-diffraction compact accelerator. Physical Review E 107(4), 045302 (2023) Kingma and Welling [2013] Kingma, D.P., Welling, M.: Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114 (2013) Hartmann et al. [2022] Hartmann, G., Goetzke, G., Düsterer, S., Feuer-Forson, P., Lever, F., Meier, D., Möller, F., Vera Ramirez, L., Guehr, M., Tiedtke, K., et al.: Unsupervised real-world knowledge extraction via disentangled variational autoencoders for photon diagnostics. Scientific Reports 12(1), 20783 (2022) Scheinker et al. [2023] Scheinker, A., Filippetto, D., Cropp, F.: 6d phase space diagnostics based on adaptively tuned physics-informed generative convolutional neural networks. In: Journal of Physics: Conference Series, vol. 2420, p. 012068 (2023). IOP Publishing Cathey et al. [2018] Cathey, B., Cousineau, S., Aleksandrov, A., Zhukov, A.: First six dimensional phase space measurement of an accelerator beam. Physical Review Letters 121(6), 064804 (2018) Tenenbaum [2005] Tenenbaum, P.: Lucretia: A matlab-based toolbox for the modelling and simulation of single-pass electron beam transport systems. In: Proceedings of the 2005 Particle Accelerator Conference, pp. 4197–4199 (2005). IEEE Young and Billen [2003] Young, L., Billen, J.: The particle tracking code parmela. In: Proceedings of the Particle Accelerator Conference, vol. 5, pp. 3521–3523 (2003) Pang and Rybarcyk [2014] Pang, X., Rybarcyk, L.: Gpu accelerated online multi-particle beam dynamics simulator for ion linear particle accelerators. Computer Physics Communications 185(3), 744–753 (2014) Adelmann [2019] Adelmann, A.: On nonintrusive uncertainty quantification and surrogate model construction in particle accelerator modeling. SIAM/ASA Journal on Uncertainty Quantification 7(2), 383–416 (2019) Newton [1970] Newton, R.: Inverse problems in physics. SIAM Review 12(3), 346–356 (1970) Wolski et al. [2022] Wolski, A., Johnson, M.A., King, M., Militsyn, B.L., Williams, P.H.: Transverse phase space tomography in an accelerator test facility using image compression and machine learning. Physical Review Accelerators and Beams 25(12), 122803 (2022) Mayet et al. [2022] Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022) Zhu et al. [2] Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. 
[2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. 
[2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. 
[2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Shi, X., Chen, Z., Wang, H., Yeung, D.-Y., Wong, W.-K., Woo, W.-c.: Convolutional lstm network: A machine learning approach for precipitation nowcasting. Advances in neural information processing systems 28 (2015) Cheng et al. [2020] Cheng, M., Fang, F., Pain, C.C., Navon, I.: Data-driven modelling of nonlinear spatio-temporal fluid flows using a deep convolutional generative adversarial network. Computer Methods in Applied Mechanics and Engineering 365, 113000 (2020) Kipf and Welling [2016] Kipf, T.N., Welling, M.: Semi-supervised classification with graph convolutional networks. arXiv preprint arXiv:1609.02907 (2016) Chen et al. [2021] Chen, J., Hachem, E., Viquerat, J.: Graph neural networks for laminar flow prediction around random two-dimensional shapes. Physics of Fluids 33(12) (2021) Scheinker [2021] Scheinker, A.: Adaptive machine learning for time-varying systems: low dimensional latent space tuning. Journal of Instrumentation 16(10), 10008 (2021) Montes de Oca Zapiain et al. [2021] Oca Zapiain, D., Stewart, J.A., Dingreville, R.: Accelerating phase-field-based microstructure evolution predictions via surrogate models trained by machine learning methods. npj Computational Materials 7(1), 3 (2021) Wiewel et al. [2019] Wiewel, S., Becher, M., Thuerey, N.: Latent space physics: Towards learning the temporal evolution of fluid flow. In: Computer Graphics Forum, vol. 38, pp. 71–82 (2019). Wiley Online Library Nakamura et al. [2021] Nakamura, T., Fukami, K., Hasegawa, K., Nabae, Y., Fukagata, K.: Convolutional neural network and long short-term memory based reduced order surrogate for minimal turbulent channel flow. Physics of Fluids 33(2) (2021) Maulik et al. [2021] Maulik, R., Lusch, B., Balaprakash, P.: Reduced-order modeling of advection-dominated systems with recurrent neural networks and convolutional autoencoders. Physics of Fluids 33(3) (2021) Vlachas et al. [2022] Vlachas, P.R., Arampatzis, G., Uhler, C., Koumoutsakos, P.: Multiscale simulations of complex systems by learning their effective dynamics. Nature Machine Intelligence 4(4), 359–366 (2022) Rautela et al. [2022] Rautela, M., Senthilnath, J., Monaco, E., Gopalakrishnan, S.: Delamination prediction in composite panels using unsupervised-feature learning methods with wavelet-enhanced guided wave representations. Composite Structures 291, 115579 (2022) Scheinker et al. [2023] Scheinker, A., Cropp, F., Filippetto, D.: Adaptive autoencoder latent space tuning for more robust machine learning beyond the training set for six-dimensional phase space diagnostics of a time-varying ultrafast electron-diffraction compact accelerator. Physical Review E 107(4), 045302 (2023) Kingma and Welling [2013] Kingma, D.P., Welling, M.: Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114 (2013) Hartmann et al. [2022] Hartmann, G., Goetzke, G., Düsterer, S., Feuer-Forson, P., Lever, F., Meier, D., Möller, F., Vera Ramirez, L., Guehr, M., Tiedtke, K., et al.: Unsupervised real-world knowledge extraction via disentangled variational autoencoders for photon diagnostics. Scientific Reports 12(1), 20783 (2022) Scheinker et al. [2023] Scheinker, A., Filippetto, D., Cropp, F.: 6d phase space diagnostics based on adaptively tuned physics-informed generative convolutional neural networks. In: Journal of Physics: Conference Series, vol. 2420, p. 
012068 (2023). IOP Publishing Cathey et al. [2018] Cathey, B., Cousineau, S., Aleksandrov, A., Zhukov, A.: First six dimensional phase space measurement of an accelerator beam. Physical Review Letters 121(6), 064804 (2018) Tenenbaum [2005] Tenenbaum, P.: Lucretia: A matlab-based toolbox for the modelling and simulation of single-pass electron beam transport systems. In: Proceedings of the 2005 Particle Accelerator Conference, pp. 4197–4199 (2005). IEEE Young and Billen [2003] Young, L., Billen, J.: The particle tracking code parmela. In: Proceedings of the Particle Accelerator Conference, vol. 5, pp. 3521–3523 (2003) Pang and Rybarcyk [2014] Pang, X., Rybarcyk, L.: Gpu accelerated online multi-particle beam dynamics simulator for ion linear particle accelerators. Computer Physics Communications 185(3), 744–753 (2014) Adelmann [2019] Adelmann, A.: On nonintrusive uncertainty quantification and surrogate model construction in particle accelerator modeling. SIAM/ASA Journal on Uncertainty Quantification 7(2), 383–416 (2019) Newton [1970] Newton, R.: Inverse problems in physics. SIAM Review 12(3), 346–356 (1970) Wolski et al. [2022] Wolski, A., Johnson, M.A., King, M., Militsyn, B.L., Williams, P.H.: Transverse phase space tomography in an accelerator test facility using image compression and machine learning. Physical Review Accelerators and Beams 25(12), 122803 (2022) Mayet et al. [2022] Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022) Zhu et al. [2] Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. [2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. 
[2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. 
[2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Cheng, M., Fang, F., Pain, C.C., Navon, I.: Data-driven modelling of nonlinear spatio-temporal fluid flows using a deep convolutional generative adversarial network. Computer Methods in Applied Mechanics and Engineering 365, 113000 (2020) Kipf and Welling [2016] Kipf, T.N., Welling, M.: Semi-supervised classification with graph convolutional networks. arXiv preprint arXiv:1609.02907 (2016) Chen et al. [2021] Chen, J., Hachem, E., Viquerat, J.: Graph neural networks for laminar flow prediction around random two-dimensional shapes. Physics of Fluids 33(12) (2021) Scheinker [2021] Scheinker, A.: Adaptive machine learning for time-varying systems: low dimensional latent space tuning. Journal of Instrumentation 16(10), 10008 (2021) Montes de Oca Zapiain et al. [2021] Oca Zapiain, D., Stewart, J.A., Dingreville, R.: Accelerating phase-field-based microstructure evolution predictions via surrogate models trained by machine learning methods. npj Computational Materials 7(1), 3 (2021) Wiewel et al. [2019] Wiewel, S., Becher, M., Thuerey, N.: Latent space physics: Towards learning the temporal evolution of fluid flow. In: Computer Graphics Forum, vol. 38, pp. 71–82 (2019). Wiley Online Library Nakamura et al. 
[2021] Nakamura, T., Fukami, K., Hasegawa, K., Nabae, Y., Fukagata, K.: Convolutional neural network and long short-term memory based reduced order surrogate for minimal turbulent channel flow. Physics of Fluids 33(2) (2021)
Maulik et al. [2021] Maulik, R., Lusch, B., Balaprakash, P.: Reduced-order modeling of advection-dominated systems with recurrent neural networks and convolutional autoencoders. Physics of Fluids 33(3) (2021)
Vlachas et al. [2022] Vlachas, P.R., Arampatzis, G., Uhler, C., Koumoutsakos, P.: Multiscale simulations of complex systems by learning their effective dynamics. Nature Machine Intelligence 4(4), 359–366 (2022)
Rautela et al. [2022] Rautela, M., Senthilnath, J., Monaco, E., Gopalakrishnan, S.: Delamination prediction in composite panels using unsupervised-feature learning methods with wavelet-enhanced guided wave representations. Composite Structures 291, 115579 (2022)
Scheinker et al. [2023] Scheinker, A., Cropp, F., Filippetto, D.: Adaptive autoencoder latent space tuning for more robust machine learning beyond the training set for six-dimensional phase space diagnostics of a time-varying ultrafast electron-diffraction compact accelerator. Physical Review E 107(4), 045302 (2023)
Kingma and Welling [2013] Kingma, D.P., Welling, M.: Auto-encoding variational Bayes. arXiv preprint arXiv:1312.6114 (2013)
Hartmann et al. [2022] Hartmann, G., Goetzke, G., Düsterer, S., Feuer-Forson, P., Lever, F., Meier, D., Möller, F., Vera Ramirez, L., Guehr, M., Tiedtke, K., et al.: Unsupervised real-world knowledge extraction via disentangled variational autoencoders for photon diagnostics. Scientific Reports 12(1), 20783 (2022)
Scheinker et al. [2023] Scheinker, A., Filippetto, D., Cropp, F.: 6D phase space diagnostics based on adaptively tuned physics-informed generative convolutional neural networks. In: Journal of Physics: Conference Series, vol. 2420, p. 012068 (2023). IOP Publishing
Cathey et al. [2018] Cathey, B., Cousineau, S., Aleksandrov, A., Zhukov, A.: First six dimensional phase space measurement of an accelerator beam. Physical Review Letters 121(6), 064804 (2018)
Tenenbaum [2005] Tenenbaum, P.: Lucretia: A MATLAB-based toolbox for the modelling and simulation of single-pass electron beam transport systems. In: Proceedings of the 2005 Particle Accelerator Conference, pp. 4197–4199 (2005). IEEE
Young and Billen [2003] Young, L., Billen, J.: The particle tracking code PARMELA. In: Proceedings of the Particle Accelerator Conference, vol. 5, pp. 3521–3523 (2003)
Pang and Rybarcyk [2014] Pang, X., Rybarcyk, L.: GPU accelerated online multi-particle beam dynamics simulator for ion linear particle accelerators. Computer Physics Communications 185(3), 744–753 (2014)
Adelmann [2019] Adelmann, A.: On nonintrusive uncertainty quantification and surrogate model construction in particle accelerator modeling. SIAM/ASA Journal on Uncertainty Quantification 7(2), 383–416 (2019)
Newton [1970] Newton, R.: Inverse problems in physics. SIAM Review 12(3), 346–356 (1970)
Wolski et al. [2022] Wolski, A., Johnson, M.A., King, M., Militsyn, B.L., Williams, P.H.: Transverse phase space tomography in an accelerator test facility using image compression and machine learning. Physical Review Accelerators and Beams 25(12), 122803 (2022)
Mayet et al. [2022] Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022)
Zhu et al. [2021] Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2021)
Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018)
Caliari et al. [2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep Lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023)
Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense X-ray free-electron-laser pulses using Bayesian optimization. Physical Review Research 5(2), 023114 (2023)
Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical Review Accelerators and Beams 23(7), 074601 (2020)
Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022)
Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient RF cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022)
Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at Jefferson Laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020)
Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018)
Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023)
Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016)
Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (ARIMA) models. Academic Emergency Medicine 5(7), 739–744 (1998)
Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: MADE: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR
Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016)
Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature Computational Science 2(11), 745–757 (2022)
Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023)
Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE Transactions on Neural Networks and Learning Systems 32(1), 4–24 (2020)
Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons (2008)
Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021)
Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in Neural Information Processing Systems 28 (2015)
Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021)
Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in VAE latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021)
Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of Cheminformatics 10(1), 1–9 (2018)
Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive LSTM for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020)
Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014)
Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE Transactions on Image Processing 13(4), 600–612 (2004)
Van der Maaten and Hinton [2008] Van der Maaten, L., Hinton, G.: Visualizing data using t-SNE. Journal of Machine Learning Research 9(11) (2008)
McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: UMAP: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018)
Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022)
Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023)
Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: GANs trained by a two time-scale update rule converge to a local Nash equilibrium. Advances in Neural Information Processing Systems 30 (2017)
Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in Neural Information Processing Systems 31 (2018)
Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023)
Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of ResNet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE
He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016)
Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018)
In: Proceedings of the Particle Accelerator Conference, vol. 5, pp. 3521–3523 (2003) Pang and Rybarcyk [2014] Pang, X., Rybarcyk, L.: Gpu accelerated online multi-particle beam dynamics simulator for ion linear particle accelerators. Computer Physics Communications 185(3), 744–753 (2014) Adelmann [2019] Adelmann, A.: On nonintrusive uncertainty quantification and surrogate model construction in particle accelerator modeling. SIAM/ASA Journal on Uncertainty Quantification 7(2), 383–416 (2019) Newton [1970] Newton, R.: Inverse problems in physics. SIAM Review 12(3), 346–356 (1970) Wolski et al. [2022] Wolski, A., Johnson, M.A., King, M., Militsyn, B.L., Williams, P.H.: Transverse phase space tomography in an accelerator test facility using image compression and machine learning. Physical Review Accelerators and Beams 25(12), 122803 (2022) Mayet et al. [2022] Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022) Zhu et al. [2] Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. [2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. 
[2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. 
arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Maulik, R., Lusch, B., Balaprakash, P.: Reduced-order modeling of advection-dominated systems with recurrent neural networks and convolutional autoencoders. Physics of Fluids 33(3) (2021) Vlachas et al. [2022] Vlachas, P.R., Arampatzis, G., Uhler, C., Koumoutsakos, P.: Multiscale simulations of complex systems by learning their effective dynamics. Nature Machine Intelligence 4(4), 359–366 (2022) Rautela et al. [2022] Rautela, M., Senthilnath, J., Monaco, E., Gopalakrishnan, S.: Delamination prediction in composite panels using unsupervised-feature learning methods with wavelet-enhanced guided wave representations. Composite Structures 291, 115579 (2022) Scheinker et al. [2023] Scheinker, A., Cropp, F., Filippetto, D.: Adaptive autoencoder latent space tuning for more robust machine learning beyond the training set for six-dimensional phase space diagnostics of a time-varying ultrafast electron-diffraction compact accelerator. Physical Review E 107(4), 045302 (2023) Kingma and Welling [2013] Kingma, D.P., Welling, M.: Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114 (2013) Hartmann et al. [2022] Hartmann, G., Goetzke, G., Düsterer, S., Feuer-Forson, P., Lever, F., Meier, D., Möller, F., Vera Ramirez, L., Guehr, M., Tiedtke, K., et al.: Unsupervised real-world knowledge extraction via disentangled variational autoencoders for photon diagnostics. Scientific Reports 12(1), 20783 (2022) Scheinker et al. [2023] Scheinker, A., Filippetto, D., Cropp, F.: 6d phase space diagnostics based on adaptively tuned physics-informed generative convolutional neural networks. In: Journal of Physics: Conference Series, vol. 2420, p. 012068 (2023). IOP Publishing Cathey et al. 
[2018] Cathey, B., Cousineau, S., Aleksandrov, A., Zhukov, A.: First six dimensional phase space measurement of an accelerator beam. Physical Review Letters 121(6), 064804 (2018) Tenenbaum [2005] Tenenbaum, P.: Lucretia: A matlab-based toolbox for the modelling and simulation of single-pass electron beam transport systems. In: Proceedings of the 2005 Particle Accelerator Conference, pp. 4197–4199 (2005). IEEE Young and Billen [2003] Young, L., Billen, J.: The particle tracking code parmela. In: Proceedings of the Particle Accelerator Conference, vol. 5, pp. 3521–3523 (2003) Pang and Rybarcyk [2014] Pang, X., Rybarcyk, L.: Gpu accelerated online multi-particle beam dynamics simulator for ion linear particle accelerators. Computer Physics Communications 185(3), 744–753 (2014) Adelmann [2019] Adelmann, A.: On nonintrusive uncertainty quantification and surrogate model construction in particle accelerator modeling. SIAM/ASA Journal on Uncertainty Quantification 7(2), 383–416 (2019) Newton [1970] Newton, R.: Inverse problems in physics. SIAM Review 12(3), 346–356 (1970) Wolski et al. [2022] Wolski, A., Johnson, M.A., King, M., Militsyn, B.L., Williams, P.H.: Transverse phase space tomography in an accelerator test facility using image compression and machine learning. Physical Review Accelerators and Beams 25(12), 122803 (2022) Mayet et al. [2022] Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022) Zhu et al. [2] Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. [2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. 
[2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. 
[2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Vlachas, P.R., Arampatzis, G., Uhler, C., Koumoutsakos, P.: Multiscale simulations of complex systems by learning their effective dynamics. Nature Machine Intelligence 4(4), 359–366 (2022) Rautela et al. [2022] Rautela, M., Senthilnath, J., Monaco, E., Gopalakrishnan, S.: Delamination prediction in composite panels using unsupervised-feature learning methods with wavelet-enhanced guided wave representations. Composite Structures 291, 115579 (2022) Scheinker et al. [2023] Scheinker, A., Cropp, F., Filippetto, D.: Adaptive autoencoder latent space tuning for more robust machine learning beyond the training set for six-dimensional phase space diagnostics of a time-varying ultrafast electron-diffraction compact accelerator. Physical Review E 107(4), 045302 (2023) Kingma and Welling [2013] Kingma, D.P., Welling, M.: Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114 (2013) Hartmann et al. [2022] Hartmann, G., Goetzke, G., Düsterer, S., Feuer-Forson, P., Lever, F., Meier, D., Möller, F., Vera Ramirez, L., Guehr, M., Tiedtke, K., et al.: Unsupervised real-world knowledge extraction via disentangled variational autoencoders for photon diagnostics. Scientific Reports 12(1), 20783 (2022) Scheinker et al. 
[2023] Scheinker, A., Filippetto, D., Cropp, F.: 6d phase space diagnostics based on adaptively tuned physics-informed generative convolutional neural networks. In: Journal of Physics: Conference Series, vol. 2420, p. 012068 (2023). IOP Publishing Cathey et al. [2018] Cathey, B., Cousineau, S., Aleksandrov, A., Zhukov, A.: First six dimensional phase space measurement of an accelerator beam. Physical Review Letters 121(6), 064804 (2018) Tenenbaum [2005] Tenenbaum, P.: Lucretia: A matlab-based toolbox for the modelling and simulation of single-pass electron beam transport systems. In: Proceedings of the 2005 Particle Accelerator Conference, pp. 4197–4199 (2005). IEEE Young and Billen [2003] Young, L., Billen, J.: The particle tracking code parmela. In: Proceedings of the Particle Accelerator Conference, vol. 5, pp. 3521–3523 (2003) Pang and Rybarcyk [2014] Pang, X., Rybarcyk, L.: Gpu accelerated online multi-particle beam dynamics simulator for ion linear particle accelerators. Computer Physics Communications 185(3), 744–753 (2014) Adelmann [2019] Adelmann, A.: On nonintrusive uncertainty quantification and surrogate model construction in particle accelerator modeling. SIAM/ASA Journal on Uncertainty Quantification 7(2), 383–416 (2019) Newton [1970] Newton, R.: Inverse problems in physics. SIAM Review 12(3), 346–356 (1970) Wolski et al. [2022] Wolski, A., Johnson, M.A., King, M., Militsyn, B.L., Williams, P.H.: Transverse phase space tomography in an accelerator test facility using image compression and machine learning. Physical Review Accelerators and Beams 25(12), 122803 (2022) Mayet et al. [2022] Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022) Zhu et al. [2] Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. [2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. 
[2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. 
Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Rautela, M., Senthilnath, J., Monaco, E., Gopalakrishnan, S.: Delamination prediction in composite panels using unsupervised-feature learning methods with wavelet-enhanced guided wave representations. Composite Structures 291, 115579 (2022) Scheinker et al. [2023] Scheinker, A., Cropp, F., Filippetto, D.: Adaptive autoencoder latent space tuning for more robust machine learning beyond the training set for six-dimensional phase space diagnostics of a time-varying ultrafast electron-diffraction compact accelerator. Physical Review E 107(4), 045302 (2023) Kingma and Welling [2013] Kingma, D.P., Welling, M.: Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114 (2013) Hartmann et al. 
[2022] Hartmann, G., Goetzke, G., Düsterer, S., Feuer-Forson, P., Lever, F., Meier, D., Möller, F., Vera Ramirez, L., Guehr, M., Tiedtke, K., et al.: Unsupervised real-world knowledge extraction via disentangled variational autoencoders for photon diagnostics. Scientific Reports 12(1), 20783 (2022) Scheinker et al. [2023] Scheinker, A., Filippetto, D., Cropp, F.: 6d phase space diagnostics based on adaptively tuned physics-informed generative convolutional neural networks. In: Journal of Physics: Conference Series, vol. 2420, p. 012068 (2023). IOP Publishing Cathey et al. [2018] Cathey, B., Cousineau, S., Aleksandrov, A., Zhukov, A.: First six dimensional phase space measurement of an accelerator beam. Physical Review Letters 121(6), 064804 (2018) Tenenbaum [2005] Tenenbaum, P.: Lucretia: A matlab-based toolbox for the modelling and simulation of single-pass electron beam transport systems. In: Proceedings of the 2005 Particle Accelerator Conference, pp. 4197–4199 (2005). IEEE Young and Billen [2003] Young, L., Billen, J.: The particle tracking code parmela. In: Proceedings of the Particle Accelerator Conference, vol. 5, pp. 3521–3523 (2003) Pang and Rybarcyk [2014] Pang, X., Rybarcyk, L.: Gpu accelerated online multi-particle beam dynamics simulator for ion linear particle accelerators. Computer Physics Communications 185(3), 744–753 (2014) Adelmann [2019] Adelmann, A.: On nonintrusive uncertainty quantification and surrogate model construction in particle accelerator modeling. SIAM/ASA Journal on Uncertainty Quantification 7(2), 383–416 (2019) Newton [1970] Newton, R.: Inverse problems in physics. SIAM Review 12(3), 346–356 (1970) Wolski et al. [2022] Wolski, A., Johnson, M.A., King, M., Militsyn, B.L., Williams, P.H.: Transverse phase space tomography in an accelerator test facility using image compression and machine learning. Physical Review Accelerators and Beams 25(12), 122803 (2022) Mayet et al. [2022] Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022) Zhu et al. [2] Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. [2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. 
[2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. 
[2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Scheinker, A., Cropp, F., Filippetto, D.: Adaptive autoencoder latent space tuning for more robust machine learning beyond the training set for six-dimensional phase space diagnostics of a time-varying ultrafast electron-diffraction compact accelerator. Physical Review E 107(4), 045302 (2023) Kingma and Welling [2013] Kingma, D.P., Welling, M.: Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114 (2013) Hartmann et al. [2022] Hartmann, G., Goetzke, G., Düsterer, S., Feuer-Forson, P., Lever, F., Meier, D., Möller, F., Vera Ramirez, L., Guehr, M., Tiedtke, K., et al.: Unsupervised real-world knowledge extraction via disentangled variational autoencoders for photon diagnostics. 
Scientific Reports 12(1), 20783 (2022) Scheinker et al. [2023] Scheinker, A., Filippetto, D., Cropp, F.: 6d phase space diagnostics based on adaptively tuned physics-informed generative convolutional neural networks. In: Journal of Physics: Conference Series, vol. 2420, p. 012068 (2023). IOP Publishing Cathey et al. [2018] Cathey, B., Cousineau, S., Aleksandrov, A., Zhukov, A.: First six dimensional phase space measurement of an accelerator beam. Physical Review Letters 121(6), 064804 (2018) Tenenbaum [2005] Tenenbaum, P.: Lucretia: A matlab-based toolbox for the modelling and simulation of single-pass electron beam transport systems. In: Proceedings of the 2005 Particle Accelerator Conference, pp. 4197–4199 (2005). IEEE Young and Billen [2003] Young, L., Billen, J.: The particle tracking code parmela. In: Proceedings of the Particle Accelerator Conference, vol. 5, pp. 3521–3523 (2003) Pang and Rybarcyk [2014] Pang, X., Rybarcyk, L.: Gpu accelerated online multi-particle beam dynamics simulator for ion linear particle accelerators. Computer Physics Communications 185(3), 744–753 (2014) Adelmann [2019] Adelmann, A.: On nonintrusive uncertainty quantification and surrogate model construction in particle accelerator modeling. SIAM/ASA Journal on Uncertainty Quantification 7(2), 383–416 (2019) Newton [1970] Newton, R.: Inverse problems in physics. SIAM Review 12(3), 346–356 (1970) Wolski et al. [2022] Wolski, A., Johnson, M.A., King, M., Militsyn, B.L., Williams, P.H.: Transverse phase space tomography in an accelerator test facility using image compression and machine learning. Physical Review Accelerators and Beams 25(12), 122803 (2022) Mayet et al. [2022] Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022) Zhu et al. [2] Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. [2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. 
[2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. 
Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Kingma, D.P., Welling, M.: Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114 (2013) Hartmann et al. [2022] Hartmann, G., Goetzke, G., Düsterer, S., Feuer-Forson, P., Lever, F., Meier, D., Möller, F., Vera Ramirez, L., Guehr, M., Tiedtke, K., et al.: Unsupervised real-world knowledge extraction via disentangled variational autoencoders for photon diagnostics. Scientific Reports 12(1), 20783 (2022) Scheinker et al. [2023] Scheinker, A., Filippetto, D., Cropp, F.: 6d phase space diagnostics based on adaptively tuned physics-informed generative convolutional neural networks. In: Journal of Physics: Conference Series, vol. 2420, p. 012068 (2023). IOP Publishing Cathey et al. [2018] Cathey, B., Cousineau, S., Aleksandrov, A., Zhukov, A.: First six dimensional phase space measurement of an accelerator beam. 
Physical Review Letters 121(6), 064804 (2018) Tenenbaum [2005] Tenenbaum, P.: Lucretia: A matlab-based toolbox for the modelling and simulation of single-pass electron beam transport systems. In: Proceedings of the 2005 Particle Accelerator Conference, pp. 4197–4199 (2005). IEEE Young and Billen [2003] Young, L., Billen, J.: The particle tracking code parmela. In: Proceedings of the Particle Accelerator Conference, vol. 5, pp. 3521–3523 (2003) Pang and Rybarcyk [2014] Pang, X., Rybarcyk, L.: Gpu accelerated online multi-particle beam dynamics simulator for ion linear particle accelerators. Computer Physics Communications 185(3), 744–753 (2014) Adelmann [2019] Adelmann, A.: On nonintrusive uncertainty quantification and surrogate model construction in particle accelerator modeling. SIAM/ASA Journal on Uncertainty Quantification 7(2), 383–416 (2019) Newton [1970] Newton, R.: Inverse problems in physics. SIAM Review 12(3), 346–356 (1970) Wolski et al. [2022] Wolski, A., Johnson, M.A., King, M., Militsyn, B.L., Williams, P.H.: Transverse phase space tomography in an accelerator test facility using image compression and machine learning. Physical Review Accelerators and Beams 25(12), 122803 (2022) Mayet et al. [2022] Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022) Zhu et al. [2] Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. [2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. 
Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. 
IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Hartmann, G., Goetzke, G., Düsterer, S., Feuer-Forson, P., Lever, F., Meier, D., Möller, F., Vera Ramirez, L., Guehr, M., Tiedtke, K., et al.: Unsupervised real-world knowledge extraction via disentangled variational autoencoders for photon diagnostics. Scientific Reports 12(1), 20783 (2022) Scheinker et al. [2023] Scheinker, A., Filippetto, D., Cropp, F.: 6d phase space diagnostics based on adaptively tuned physics-informed generative convolutional neural networks. In: Journal of Physics: Conference Series, vol. 2420, p. 012068 (2023). IOP Publishing Cathey et al. [2018] Cathey, B., Cousineau, S., Aleksandrov, A., Zhukov, A.: First six dimensional phase space measurement of an accelerator beam. Physical Review Letters 121(6), 064804 (2018) Tenenbaum [2005] Tenenbaum, P.: Lucretia: A matlab-based toolbox for the modelling and simulation of single-pass electron beam transport systems. In: Proceedings of the 2005 Particle Accelerator Conference, pp. 4197–4199 (2005). IEEE Young and Billen [2003] Young, L., Billen, J.: The particle tracking code parmela. In: Proceedings of the Particle Accelerator Conference, vol. 5, pp. 3521–3523 (2003) Pang and Rybarcyk [2014] Pang, X., Rybarcyk, L.: Gpu accelerated online multi-particle beam dynamics simulator for ion linear particle accelerators. 
Computer Physics Communications 185(3), 744–753 (2014) Adelmann [2019] Adelmann, A.: On nonintrusive uncertainty quantification and surrogate model construction in particle accelerator modeling. SIAM/ASA Journal on Uncertainty Quantification 7(2), 383–416 (2019) Newton [1970] Newton, R.: Inverse problems in physics. SIAM Review 12(3), 346–356 (1970) Wolski et al. [2022] Wolski, A., Johnson, M.A., King, M., Militsyn, B.L., Williams, P.H.: Transverse phase space tomography in an accelerator test facility using image compression and machine learning. Physical Review Accelerators and Beams 25(12), 122803 (2022) Mayet et al. [2022] Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022) Zhu et al. [2] Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. [2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. 
Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. 
IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Scheinker, A., Filippetto, D., Cropp, F.: 6d phase space diagnostics based on adaptively tuned physics-informed generative convolutional neural networks. In: Journal of Physics: Conference Series, vol. 2420, p. 012068 (2023). IOP Publishing Cathey et al. [2018] Cathey, B., Cousineau, S., Aleksandrov, A., Zhukov, A.: First six dimensional phase space measurement of an accelerator beam. Physical Review Letters 121(6), 064804 (2018) Tenenbaum [2005] Tenenbaum, P.: Lucretia: A matlab-based toolbox for the modelling and simulation of single-pass electron beam transport systems. In: Proceedings of the 2005 Particle Accelerator Conference, pp. 4197–4199 (2005). IEEE Young and Billen [2003] Young, L., Billen, J.: The particle tracking code parmela. In: Proceedings of the Particle Accelerator Conference, vol. 5, pp. 3521–3523 (2003) Pang and Rybarcyk [2014] Pang, X., Rybarcyk, L.: Gpu accelerated online multi-particle beam dynamics simulator for ion linear particle accelerators. Computer Physics Communications 185(3), 744–753 (2014) Adelmann [2019] Adelmann, A.: On nonintrusive uncertainty quantification and surrogate model construction in particle accelerator modeling. SIAM/ASA Journal on Uncertainty Quantification 7(2), 383–416 (2019) Newton [1970] Newton, R.: Inverse problems in physics. SIAM Review 12(3), 346–356 (1970) Wolski et al. [2022] Wolski, A., Johnson, M.A., King, M., Militsyn, B.L., Williams, P.H.: Transverse phase space tomography in an accelerator test facility using image compression and machine learning. Physical Review Accelerators and Beams 25(12), 122803 (2022) Mayet et al. [2022] Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022) Zhu et al. 
[2] Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. [2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. 
[2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. 
[2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Cathey, B., Cousineau, S., Aleksandrov, A., Zhukov, A.: First six dimensional phase space measurement of an accelerator beam. Physical Review Letters 121(6), 064804 (2018) Tenenbaum [2005] Tenenbaum, P.: Lucretia: A matlab-based toolbox for the modelling and simulation of single-pass electron beam transport systems. In: Proceedings of the 2005 Particle Accelerator Conference, pp. 4197–4199 (2005). IEEE Young and Billen [2003] Young, L., Billen, J.: The particle tracking code parmela. In: Proceedings of the Particle Accelerator Conference, vol. 5, pp. 3521–3523 (2003) Pang and Rybarcyk [2014] Pang, X., Rybarcyk, L.: Gpu accelerated online multi-particle beam dynamics simulator for ion linear particle accelerators. Computer Physics Communications 185(3), 744–753 (2014) Adelmann [2019] Adelmann, A.: On nonintrusive uncertainty quantification and surrogate model construction in particle accelerator modeling. SIAM/ASA Journal on Uncertainty Quantification 7(2), 383–416 (2019) Newton [1970] Newton, R.: Inverse problems in physics. SIAM Review 12(3), 346–356 (1970) Wolski et al. [2022] Wolski, A., Johnson, M.A., King, M., Militsyn, B.L., Williams, P.H.: Transverse phase space tomography in an accelerator test facility using image compression and machine learning. Physical Review Accelerators and Beams 25(12), 122803 (2022) Mayet et al. [2022] Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022) Zhu et al. [2] Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. [2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. 
Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. 
Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Tenenbaum, P.: Lucretia: A matlab-based toolbox for the modelling and simulation of single-pass electron beam transport systems. In: Proceedings of the 2005 Particle Accelerator Conference, pp. 4197–4199 (2005). IEEE Young and Billen [2003] Young, L., Billen, J.: The particle tracking code parmela. In: Proceedings of the Particle Accelerator Conference, vol. 5, pp. 3521–3523 (2003) Pang and Rybarcyk [2014] Pang, X., Rybarcyk, L.: Gpu accelerated online multi-particle beam dynamics simulator for ion linear particle accelerators. 
Computer Physics Communications 185(3), 744–753 (2014) Adelmann [2019] Adelmann, A.: On nonintrusive uncertainty quantification and surrogate model construction in particle accelerator modeling. SIAM/ASA Journal on Uncertainty Quantification 7(2), 383–416 (2019) Newton [1970] Newton, R.: Inverse problems in physics. SIAM Review 12(3), 346–356 (1970) Wolski et al. [2022] Wolski, A., Johnson, M.A., King, M., Militsyn, B.L., Williams, P.H.: Transverse phase space tomography in an accelerator test facility using image compression and machine learning. Physical Review Accelerators and Beams 25(12), 122803 (2022) Mayet et al. [2022] Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022) Zhu et al. [2] Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. [2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. 
Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. 
IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Young, L., Billen, J.: The particle tracking code parmela. In: Proceedings of the Particle Accelerator Conference, vol. 5, pp. 3521–3523 (2003) Pang and Rybarcyk [2014] Pang, X., Rybarcyk, L.: Gpu accelerated online multi-particle beam dynamics simulator for ion linear particle accelerators. Computer Physics Communications 185(3), 744–753 (2014) Adelmann [2019] Adelmann, A.: On nonintrusive uncertainty quantification and surrogate model construction in particle accelerator modeling. SIAM/ASA Journal on Uncertainty Quantification 7(2), 383–416 (2019) Newton [1970] Newton, R.: Inverse problems in physics. SIAM Review 12(3), 346–356 (1970) Wolski et al. [2022] Wolski, A., Johnson, M.A., King, M., Militsyn, B.L., Williams, P.H.: Transverse phase space tomography in an accelerator test facility using image compression and machine learning. Physical Review Accelerators and Beams 25(12), 122803 (2022) Mayet et al. [2022] Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022) Zhu et al. [2] Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. [2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. 
[2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. 
[2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. 
[2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. 
[2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. 
Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. 
[2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. 
arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. 
[2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. 
[2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. 
Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. 
[2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. 
[2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. 
Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. 
[2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. 
[2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. 
[2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. 
Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. 
John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. 
[2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. 
[2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. 
[2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. 
IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. 
[2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. 
[2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. 
[2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. 
[2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. 
[2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. 
[2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. 
[2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. 
[2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. 
[2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. 
[2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018)
The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. 
[2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Vinuesa, R., Brunton, S.L.: Enhancing computational fluid dynamics with machine learning. Nature Computational Science 2(6), 358–366 (2022) Rautela et al. [2022] Rautela, M., Huber, A., Senthilnath, J., Gopalakrishnan, S.: Inverse characterization of composites using guided waves and convolutional neural networks with dual-branch feature fusion. Mechanics of Advanced Materials and Structures 29(27), 6595–6611 (2022) Huerta et al. [2021] Huerta, E., Khan, A., Huang, X., Tian, M., Levental, M., Chard, R., Wei, W., Heflin, M., Katz, D.S., Kindratenko, V., et al.: Accelerated, scalable and reproducible ai-driven gravitational wave detection. Nature Astronomy 5(10), 1062–1068 (2021) Boehnlein et al. [2022] Boehnlein, A., Diefenthaler, M., Sato, N., Schram, M., Ziegler, V., Fanelli, C., Hjorth-Jensen, M., Horn, T., Kuchera, M.P., Lee, D., et al.: Colloquium: Machine learning in nuclear physics. Reviews of Modern Physics 94(3), 031003 (2022) Gonoskov et al. [2019] Gonoskov, A., Wallin, E., Polovinkin, A., Meyerov, I.: Employing machine learning for theory validation and identification of experimental conditions in laser-plasma physics. Scientific reports 9(1), 7043 (2019) AlQuraishi and Sorger [2021] AlQuraishi, M., Sorger, P.K.: Differentiable biology: using deep learning for biophysics-based and data-driven modeling of molecular mechanisms. Nature methods 18(10), 1169–1180 (2021) Reichstein et al. [2019] Reichstein, M., Camps-Valls, G., Stevens, B., Jung, M., Denzler, J., Carvalhais, N., Prabhat, f.: Deep learning and process understanding for data-driven earth system science. Nature 566(7743), 195–204 (2019) Scheinker and Pokharel [2023] Scheinker, A., Pokharel, R.: Physics-constrained 3D convolutional neural networks for electrodynamics. APL Machine Learning 1(2) (2023) Wandel et al. [2021] Wandel, N., Weinmann, M., Klein, R.: Teaching the incompressible navier–stokes equations to fast neural surrogate models in three dimensions. Physics of Fluids 33(4) (2021) Shi et al. [2015] Shi, X., Chen, Z., Wang, H., Yeung, D.-Y., Wong, W.-K., Woo, W.-c.: Convolutional lstm network: A machine learning approach for precipitation nowcasting. Advances in neural information processing systems 28 (2015) Cheng et al. [2020] Cheng, M., Fang, F., Pain, C.C., Navon, I.: Data-driven modelling of nonlinear spatio-temporal fluid flows using a deep convolutional generative adversarial network. Computer Methods in Applied Mechanics and Engineering 365, 113000 (2020) Kipf and Welling [2016] Kipf, T.N., Welling, M.: Semi-supervised classification with graph convolutional networks. arXiv preprint arXiv:1609.02907 (2016) Chen et al. 
[2021] Chen, J., Hachem, E., Viquerat, J.: Graph neural networks for laminar flow prediction around random two-dimensional shapes. Physics of Fluids 33(12) (2021) Scheinker [2021] Scheinker, A.: Adaptive machine learning for time-varying systems: low dimensional latent space tuning. Journal of Instrumentation 16(10), 10008 (2021) Montes de Oca Zapiain et al. [2021] Oca Zapiain, D., Stewart, J.A., Dingreville, R.: Accelerating phase-field-based microstructure evolution predictions via surrogate models trained by machine learning methods. npj Computational Materials 7(1), 3 (2021) Wiewel et al. [2019] Wiewel, S., Becher, M., Thuerey, N.: Latent space physics: Towards learning the temporal evolution of fluid flow. In: Computer Graphics Forum, vol. 38, pp. 71–82 (2019). Wiley Online Library Nakamura et al. [2021] Nakamura, T., Fukami, K., Hasegawa, K., Nabae, Y., Fukagata, K.: Convolutional neural network and long short-term memory based reduced order surrogate for minimal turbulent channel flow. Physics of Fluids 33(2) (2021) Maulik et al. [2021] Maulik, R., Lusch, B., Balaprakash, P.: Reduced-order modeling of advection-dominated systems with recurrent neural networks and convolutional autoencoders. Physics of Fluids 33(3) (2021) Vlachas et al. [2022] Vlachas, P.R., Arampatzis, G., Uhler, C., Koumoutsakos, P.: Multiscale simulations of complex systems by learning their effective dynamics. Nature Machine Intelligence 4(4), 359–366 (2022) Rautela et al. [2022] Rautela, M., Senthilnath, J., Monaco, E., Gopalakrishnan, S.: Delamination prediction in composite panels using unsupervised-feature learning methods with wavelet-enhanced guided wave representations. Composite Structures 291, 115579 (2022) Scheinker et al. [2023] Scheinker, A., Cropp, F., Filippetto, D.: Adaptive autoencoder latent space tuning for more robust machine learning beyond the training set for six-dimensional phase space diagnostics of a time-varying ultrafast electron-diffraction compact accelerator. Physical Review E 107(4), 045302 (2023) Kingma and Welling [2013] Kingma, D.P., Welling, M.: Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114 (2013) Hartmann et al. [2022] Hartmann, G., Goetzke, G., Düsterer, S., Feuer-Forson, P., Lever, F., Meier, D., Möller, F., Vera Ramirez, L., Guehr, M., Tiedtke, K., et al.: Unsupervised real-world knowledge extraction via disentangled variational autoencoders for photon diagnostics. Scientific Reports 12(1), 20783 (2022) Scheinker et al. [2023] Scheinker, A., Filippetto, D., Cropp, F.: 6d phase space diagnostics based on adaptively tuned physics-informed generative convolutional neural networks. In: Journal of Physics: Conference Series, vol. 2420, p. 012068 (2023). IOP Publishing Cathey et al. [2018] Cathey, B., Cousineau, S., Aleksandrov, A., Zhukov, A.: First six dimensional phase space measurement of an accelerator beam. Physical Review Letters 121(6), 064804 (2018) Tenenbaum [2005] Tenenbaum, P.: Lucretia: A matlab-based toolbox for the modelling and simulation of single-pass electron beam transport systems. In: Proceedings of the 2005 Particle Accelerator Conference, pp. 4197–4199 (2005). IEEE Young and Billen [2003] Young, L., Billen, J.: The particle tracking code parmela. In: Proceedings of the Particle Accelerator Conference, vol. 5, pp. 3521–3523 (2003) Pang and Rybarcyk [2014] Pang, X., Rybarcyk, L.: Gpu accelerated online multi-particle beam dynamics simulator for ion linear particle accelerators. 
Computer Physics Communications 185(3), 744–753 (2014) Adelmann [2019] Adelmann, A.: On nonintrusive uncertainty quantification and surrogate model construction in particle accelerator modeling. SIAM/ASA Journal on Uncertainty Quantification 7(2), 383–416 (2019) Newton [1970] Newton, R.: Inverse problems in physics. SIAM Review 12(3), 346–356 (1970) Wolski et al. [2022] Wolski, A., Johnson, M.A., King, M., Militsyn, B.L., Williams, P.H.: Transverse phase space tomography in an accelerator test facility using image compression and machine learning. Physical Review Accelerators and Beams 25(12), 122803 (2022) Mayet et al. [2022] Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022) Zhu et al. [2] Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. [2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. 
Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. 
IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Rautela, M., Huber, A., Senthilnath, J., Gopalakrishnan, S.: Inverse characterization of composites using guided waves and convolutional neural networks with dual-branch feature fusion. Mechanics of Advanced Materials and Structures 29(27), 6595–6611 (2022) Huerta et al. [2021] Huerta, E., Khan, A., Huang, X., Tian, M., Levental, M., Chard, R., Wei, W., Heflin, M., Katz, D.S., Kindratenko, V., et al.: Accelerated, scalable and reproducible ai-driven gravitational wave detection. Nature Astronomy 5(10), 1062–1068 (2021) Boehnlein et al. [2022] Boehnlein, A., Diefenthaler, M., Sato, N., Schram, M., Ziegler, V., Fanelli, C., Hjorth-Jensen, M., Horn, T., Kuchera, M.P., Lee, D., et al.: Colloquium: Machine learning in nuclear physics. Reviews of Modern Physics 94(3), 031003 (2022) Gonoskov et al. [2019] Gonoskov, A., Wallin, E., Polovinkin, A., Meyerov, I.: Employing machine learning for theory validation and identification of experimental conditions in laser-plasma physics. Scientific reports 9(1), 7043 (2019) AlQuraishi and Sorger [2021] AlQuraishi, M., Sorger, P.K.: Differentiable biology: using deep learning for biophysics-based and data-driven modeling of molecular mechanisms. Nature methods 18(10), 1169–1180 (2021) Reichstein et al. [2019] Reichstein, M., Camps-Valls, G., Stevens, B., Jung, M., Denzler, J., Carvalhais, N., Prabhat, f.: Deep learning and process understanding for data-driven earth system science. Nature 566(7743), 195–204 (2019) Scheinker and Pokharel [2023] Scheinker, A., Pokharel, R.: Physics-constrained 3D convolutional neural networks for electrodynamics. APL Machine Learning 1(2) (2023) Wandel et al. [2021] Wandel, N., Weinmann, M., Klein, R.: Teaching the incompressible navier–stokes equations to fast neural surrogate models in three dimensions. Physics of Fluids 33(4) (2021) Shi et al. 
[2015] Shi, X., Chen, Z., Wang, H., Yeung, D.-Y., Wong, W.-K., Woo, W.-c.: Convolutional lstm network: A machine learning approach for precipitation nowcasting. Advances in neural information processing systems 28 (2015) Cheng et al. [2020] Cheng, M., Fang, F., Pain, C.C., Navon, I.: Data-driven modelling of nonlinear spatio-temporal fluid flows using a deep convolutional generative adversarial network. Computer Methods in Applied Mechanics and Engineering 365, 113000 (2020) Kipf and Welling [2016] Kipf, T.N., Welling, M.: Semi-supervised classification with graph convolutional networks. arXiv preprint arXiv:1609.02907 (2016) Chen et al. [2021] Chen, J., Hachem, E., Viquerat, J.: Graph neural networks for laminar flow prediction around random two-dimensional shapes. Physics of Fluids 33(12) (2021) Scheinker [2021] Scheinker, A.: Adaptive machine learning for time-varying systems: low dimensional latent space tuning. Journal of Instrumentation 16(10), 10008 (2021) Montes de Oca Zapiain et al. [2021] Oca Zapiain, D., Stewart, J.A., Dingreville, R.: Accelerating phase-field-based microstructure evolution predictions via surrogate models trained by machine learning methods. npj Computational Materials 7(1), 3 (2021) Wiewel et al. [2019] Wiewel, S., Becher, M., Thuerey, N.: Latent space physics: Towards learning the temporal evolution of fluid flow. In: Computer Graphics Forum, vol. 38, pp. 71–82 (2019). Wiley Online Library Nakamura et al. [2021] Nakamura, T., Fukami, K., Hasegawa, K., Nabae, Y., Fukagata, K.: Convolutional neural network and long short-term memory based reduced order surrogate for minimal turbulent channel flow. Physics of Fluids 33(2) (2021) Maulik et al. [2021] Maulik, R., Lusch, B., Balaprakash, P.: Reduced-order modeling of advection-dominated systems with recurrent neural networks and convolutional autoencoders. Physics of Fluids 33(3) (2021) Vlachas et al. [2022] Vlachas, P.R., Arampatzis, G., Uhler, C., Koumoutsakos, P.: Multiscale simulations of complex systems by learning their effective dynamics. Nature Machine Intelligence 4(4), 359–366 (2022) Rautela et al. [2022] Rautela, M., Senthilnath, J., Monaco, E., Gopalakrishnan, S.: Delamination prediction in composite panels using unsupervised-feature learning methods with wavelet-enhanced guided wave representations. Composite Structures 291, 115579 (2022) Scheinker et al. [2023] Scheinker, A., Cropp, F., Filippetto, D.: Adaptive autoencoder latent space tuning for more robust machine learning beyond the training set for six-dimensional phase space diagnostics of a time-varying ultrafast electron-diffraction compact accelerator. Physical Review E 107(4), 045302 (2023) Kingma and Welling [2013] Kingma, D.P., Welling, M.: Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114 (2013) Hartmann et al. [2022] Hartmann, G., Goetzke, G., Düsterer, S., Feuer-Forson, P., Lever, F., Meier, D., Möller, F., Vera Ramirez, L., Guehr, M., Tiedtke, K., et al.: Unsupervised real-world knowledge extraction via disentangled variational autoencoders for photon diagnostics. Scientific Reports 12(1), 20783 (2022) Scheinker et al. [2023] Scheinker, A., Filippetto, D., Cropp, F.: 6d phase space diagnostics based on adaptively tuned physics-informed generative convolutional neural networks. In: Journal of Physics: Conference Series, vol. 2420, p. 012068 (2023). IOP Publishing Cathey et al. [2018] Cathey, B., Cousineau, S., Aleksandrov, A., Zhukov, A.: First six dimensional phase space measurement of an accelerator beam. 
Physical Review Letters 121(6), 064804 (2018) Tenenbaum [2005] Tenenbaum, P.: Lucretia: A matlab-based toolbox for the modelling and simulation of single-pass electron beam transport systems. In: Proceedings of the 2005 Particle Accelerator Conference, pp. 4197–4199 (2005). IEEE Young and Billen [2003] Young, L., Billen, J.: The particle tracking code parmela. In: Proceedings of the Particle Accelerator Conference, vol. 5, pp. 3521–3523 (2003) Pang and Rybarcyk [2014] Pang, X., Rybarcyk, L.: Gpu accelerated online multi-particle beam dynamics simulator for ion linear particle accelerators. Computer Physics Communications 185(3), 744–753 (2014) Adelmann [2019] Adelmann, A.: On nonintrusive uncertainty quantification and surrogate model construction in particle accelerator modeling. SIAM/ASA Journal on Uncertainty Quantification 7(2), 383–416 (2019) Newton [1970] Newton, R.: Inverse problems in physics. SIAM Review 12(3), 346–356 (1970) Wolski et al. [2022] Wolski, A., Johnson, M.A., King, M., Militsyn, B.L., Williams, P.H.: Transverse phase space tomography in an accelerator test facility using image compression and machine learning. Physical Review Accelerators and Beams 25(12), 122803 (2022) Mayet et al. [2022] Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022) Zhu et al. [2] Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. [2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. 
Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. 
IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Huerta, E., Khan, A., Huang, X., Tian, M., Levental, M., Chard, R., Wei, W., Heflin, M., Katz, D.S., Kindratenko, V., et al.: Accelerated, scalable and reproducible ai-driven gravitational wave detection. Nature Astronomy 5(10), 1062–1068 (2021) Boehnlein et al. [2022] Boehnlein, A., Diefenthaler, M., Sato, N., Schram, M., Ziegler, V., Fanelli, C., Hjorth-Jensen, M., Horn, T., Kuchera, M.P., Lee, D., et al.: Colloquium: Machine learning in nuclear physics. Reviews of Modern Physics 94(3), 031003 (2022) Gonoskov et al. [2019] Gonoskov, A., Wallin, E., Polovinkin, A., Meyerov, I.: Employing machine learning for theory validation and identification of experimental conditions in laser-plasma physics. Scientific reports 9(1), 7043 (2019) AlQuraishi and Sorger [2021] AlQuraishi, M., Sorger, P.K.: Differentiable biology: using deep learning for biophysics-based and data-driven modeling of molecular mechanisms. Nature methods 18(10), 1169–1180 (2021) Reichstein et al. [2019] Reichstein, M., Camps-Valls, G., Stevens, B., Jung, M., Denzler, J., Carvalhais, N., Prabhat, f.: Deep learning and process understanding for data-driven earth system science. Nature 566(7743), 195–204 (2019) Scheinker and Pokharel [2023] Scheinker, A., Pokharel, R.: Physics-constrained 3D convolutional neural networks for electrodynamics. APL Machine Learning 1(2) (2023) Wandel et al. 
[2021] Wandel, N., Weinmann, M., Klein, R.: Teaching the incompressible navier–stokes equations to fast neural surrogate models in three dimensions. Physics of Fluids 33(4) (2021) Shi et al. [2015] Shi, X., Chen, Z., Wang, H., Yeung, D.-Y., Wong, W.-K., Woo, W.-c.: Convolutional lstm network: A machine learning approach for precipitation nowcasting. Advances in neural information processing systems 28 (2015) Cheng et al. [2020] Cheng, M., Fang, F., Pain, C.C., Navon, I.: Data-driven modelling of nonlinear spatio-temporal fluid flows using a deep convolutional generative adversarial network. Computer Methods in Applied Mechanics and Engineering 365, 113000 (2020) Kipf and Welling [2016] Kipf, T.N., Welling, M.: Semi-supervised classification with graph convolutional networks. arXiv preprint arXiv:1609.02907 (2016) Chen et al. [2021] Chen, J., Hachem, E., Viquerat, J.: Graph neural networks for laminar flow prediction around random two-dimensional shapes. Physics of Fluids 33(12) (2021) Scheinker [2021] Scheinker, A.: Adaptive machine learning for time-varying systems: low dimensional latent space tuning. Journal of Instrumentation 16(10), 10008 (2021) Montes de Oca Zapiain et al. [2021] Oca Zapiain, D., Stewart, J.A., Dingreville, R.: Accelerating phase-field-based microstructure evolution predictions via surrogate models trained by machine learning methods. npj Computational Materials 7(1), 3 (2021) Wiewel et al. [2019] Wiewel, S., Becher, M., Thuerey, N.: Latent space physics: Towards learning the temporal evolution of fluid flow. In: Computer Graphics Forum, vol. 38, pp. 71–82 (2019). Wiley Online Library Nakamura et al. [2021] Nakamura, T., Fukami, K., Hasegawa, K., Nabae, Y., Fukagata, K.: Convolutional neural network and long short-term memory based reduced order surrogate for minimal turbulent channel flow. Physics of Fluids 33(2) (2021) Maulik et al. [2021] Maulik, R., Lusch, B., Balaprakash, P.: Reduced-order modeling of advection-dominated systems with recurrent neural networks and convolutional autoencoders. Physics of Fluids 33(3) (2021) Vlachas et al. [2022] Vlachas, P.R., Arampatzis, G., Uhler, C., Koumoutsakos, P.: Multiscale simulations of complex systems by learning their effective dynamics. Nature Machine Intelligence 4(4), 359–366 (2022) Rautela et al. [2022] Rautela, M., Senthilnath, J., Monaco, E., Gopalakrishnan, S.: Delamination prediction in composite panels using unsupervised-feature learning methods with wavelet-enhanced guided wave representations. Composite Structures 291, 115579 (2022) Scheinker et al. [2023] Scheinker, A., Cropp, F., Filippetto, D.: Adaptive autoencoder latent space tuning for more robust machine learning beyond the training set for six-dimensional phase space diagnostics of a time-varying ultrafast electron-diffraction compact accelerator. Physical Review E 107(4), 045302 (2023) Kingma and Welling [2013] Kingma, D.P., Welling, M.: Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114 (2013) Hartmann et al. [2022] Hartmann, G., Goetzke, G., Düsterer, S., Feuer-Forson, P., Lever, F., Meier, D., Möller, F., Vera Ramirez, L., Guehr, M., Tiedtke, K., et al.: Unsupervised real-world knowledge extraction via disentangled variational autoencoders for photon diagnostics. Scientific Reports 12(1), 20783 (2022) Scheinker et al. [2023] Scheinker, A., Filippetto, D., Cropp, F.: 6d phase space diagnostics based on adaptively tuned physics-informed generative convolutional neural networks. In: Journal of Physics: Conference Series, vol. 2420, p. 
012068 (2023). IOP Publishing Cathey et al. [2018] Cathey, B., Cousineau, S., Aleksandrov, A., Zhukov, A.: First six dimensional phase space measurement of an accelerator beam. Physical Review Letters 121(6), 064804 (2018) Tenenbaum [2005] Tenenbaum, P.: Lucretia: A matlab-based toolbox for the modelling and simulation of single-pass electron beam transport systems. In: Proceedings of the 2005 Particle Accelerator Conference, pp. 4197–4199 (2005). IEEE Young and Billen [2003] Young, L., Billen, J.: The particle tracking code parmela. In: Proceedings of the Particle Accelerator Conference, vol. 5, pp. 3521–3523 (2003) Pang and Rybarcyk [2014] Pang, X., Rybarcyk, L.: Gpu accelerated online multi-particle beam dynamics simulator for ion linear particle accelerators. Computer Physics Communications 185(3), 744–753 (2014) Adelmann [2019] Adelmann, A.: On nonintrusive uncertainty quantification and surrogate model construction in particle accelerator modeling. SIAM/ASA Journal on Uncertainty Quantification 7(2), 383–416 (2019) Newton [1970] Newton, R.: Inverse problems in physics. SIAM Review 12(3), 346–356 (1970) Wolski et al. [2022] Wolski, A., Johnson, M.A., King, M., Militsyn, B.L., Williams, P.H.: Transverse phase space tomography in an accelerator test facility using image compression and machine learning. Physical Review Accelerators and Beams 25(12), 122803 (2022) Mayet et al. [2022] Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022) Zhu et al. [2] Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. [2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. 
[2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. 
[2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Boehnlein, A., Diefenthaler, M., Sato, N., Schram, M., Ziegler, V., Fanelli, C., Hjorth-Jensen, M., Horn, T., Kuchera, M.P., Lee, D., et al.: Colloquium: Machine learning in nuclear physics. Reviews of Modern Physics 94(3), 031003 (2022) Gonoskov et al. [2019] Gonoskov, A., Wallin, E., Polovinkin, A., Meyerov, I.: Employing machine learning for theory validation and identification of experimental conditions in laser-plasma physics. Scientific reports 9(1), 7043 (2019) AlQuraishi and Sorger [2021] AlQuraishi, M., Sorger, P.K.: Differentiable biology: using deep learning for biophysics-based and data-driven modeling of molecular mechanisms. Nature methods 18(10), 1169–1180 (2021) Reichstein et al. [2019] Reichstein, M., Camps-Valls, G., Stevens, B., Jung, M., Denzler, J., Carvalhais, N., Prabhat, f.: Deep learning and process understanding for data-driven earth system science. Nature 566(7743), 195–204 (2019) Scheinker and Pokharel [2023] Scheinker, A., Pokharel, R.: Physics-constrained 3D convolutional neural networks for electrodynamics. APL Machine Learning 1(2) (2023) Wandel et al. [2021] Wandel, N., Weinmann, M., Klein, R.: Teaching the incompressible navier–stokes equations to fast neural surrogate models in three dimensions. Physics of Fluids 33(4) (2021) Shi et al. 
[2015] Shi, X., Chen, Z., Wang, H., Yeung, D.-Y., Wong, W.-K., Woo, W.-c.: Convolutional lstm network: A machine learning approach for precipitation nowcasting. Advances in neural information processing systems 28 (2015) Cheng et al. [2020] Cheng, M., Fang, F., Pain, C.C., Navon, I.: Data-driven modelling of nonlinear spatio-temporal fluid flows using a deep convolutional generative adversarial network. Computer Methods in Applied Mechanics and Engineering 365, 113000 (2020) Kipf and Welling [2016] Kipf, T.N., Welling, M.: Semi-supervised classification with graph convolutional networks. arXiv preprint arXiv:1609.02907 (2016) Chen et al. [2021] Chen, J., Hachem, E., Viquerat, J.: Graph neural networks for laminar flow prediction around random two-dimensional shapes. Physics of Fluids 33(12) (2021) Scheinker [2021] Scheinker, A.: Adaptive machine learning for time-varying systems: low dimensional latent space tuning. Journal of Instrumentation 16(10), 10008 (2021) Montes de Oca Zapiain et al. [2021] Oca Zapiain, D., Stewart, J.A., Dingreville, R.: Accelerating phase-field-based microstructure evolution predictions via surrogate models trained by machine learning methods. npj Computational Materials 7(1), 3 (2021) Wiewel et al. [2019] Wiewel, S., Becher, M., Thuerey, N.: Latent space physics: Towards learning the temporal evolution of fluid flow. In: Computer Graphics Forum, vol. 38, pp. 71–82 (2019). Wiley Online Library Nakamura et al. [2021] Nakamura, T., Fukami, K., Hasegawa, K., Nabae, Y., Fukagata, K.: Convolutional neural network and long short-term memory based reduced order surrogate for minimal turbulent channel flow. Physics of Fluids 33(2) (2021) Maulik et al. [2021] Maulik, R., Lusch, B., Balaprakash, P.: Reduced-order modeling of advection-dominated systems with recurrent neural networks and convolutional autoencoders. Physics of Fluids 33(3) (2021) Vlachas et al. [2022] Vlachas, P.R., Arampatzis, G., Uhler, C., Koumoutsakos, P.: Multiscale simulations of complex systems by learning their effective dynamics. Nature Machine Intelligence 4(4), 359–366 (2022) Rautela et al. [2022] Rautela, M., Senthilnath, J., Monaco, E., Gopalakrishnan, S.: Delamination prediction in composite panels using unsupervised-feature learning methods with wavelet-enhanced guided wave representations. Composite Structures 291, 115579 (2022) Scheinker et al. [2023] Scheinker, A., Cropp, F., Filippetto, D.: Adaptive autoencoder latent space tuning for more robust machine learning beyond the training set for six-dimensional phase space diagnostics of a time-varying ultrafast electron-diffraction compact accelerator. Physical Review E 107(4), 045302 (2023) Kingma and Welling [2013] Kingma, D.P., Welling, M.: Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114 (2013) Hartmann et al. [2022] Hartmann, G., Goetzke, G., Düsterer, S., Feuer-Forson, P., Lever, F., Meier, D., Möller, F., Vera Ramirez, L., Guehr, M., Tiedtke, K., et al.: Unsupervised real-world knowledge extraction via disentangled variational autoencoders for photon diagnostics. Scientific Reports 12(1), 20783 (2022) Scheinker et al. [2023] Scheinker, A., Filippetto, D., Cropp, F.: 6d phase space diagnostics based on adaptively tuned physics-informed generative convolutional neural networks. In: Journal of Physics: Conference Series, vol. 2420, p. 012068 (2023). IOP Publishing Cathey et al. [2018] Cathey, B., Cousineau, S., Aleksandrov, A., Zhukov, A.: First six dimensional phase space measurement of an accelerator beam. 
Physical Review Letters 121(6), 064804 (2018) Tenenbaum [2005] Tenenbaum, P.: Lucretia: A matlab-based toolbox for the modelling and simulation of single-pass electron beam transport systems. In: Proceedings of the 2005 Particle Accelerator Conference, pp. 4197–4199 (2005). IEEE Young and Billen [2003] Young, L., Billen, J.: The particle tracking code parmela. In: Proceedings of the Particle Accelerator Conference, vol. 5, pp. 3521–3523 (2003) Pang and Rybarcyk [2014] Pang, X., Rybarcyk, L.: Gpu accelerated online multi-particle beam dynamics simulator for ion linear particle accelerators. Computer Physics Communications 185(3), 744–753 (2014) Adelmann [2019] Adelmann, A.: On nonintrusive uncertainty quantification and surrogate model construction in particle accelerator modeling. SIAM/ASA Journal on Uncertainty Quantification 7(2), 383–416 (2019) Newton [1970] Newton, R.: Inverse problems in physics. SIAM Review 12(3), 346–356 (1970) Wolski et al. [2022] Wolski, A., Johnson, M.A., King, M., Militsyn, B.L., Williams, P.H.: Transverse phase space tomography in an accelerator test facility using image compression and machine learning. Physical Review Accelerators and Beams 25(12), 122803 (2022) Mayet et al. [2022] Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022) Zhu et al. [2] Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. [2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. 
Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. 
IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Gonoskov, A., Wallin, E., Polovinkin, A., Meyerov, I.: Employing machine learning for theory validation and identification of experimental conditions in laser-plasma physics. Scientific reports 9(1), 7043 (2019) AlQuraishi and Sorger [2021] AlQuraishi, M., Sorger, P.K.: Differentiable biology: using deep learning for biophysics-based and data-driven modeling of molecular mechanisms. Nature methods 18(10), 1169–1180 (2021) Reichstein et al. [2019] Reichstein, M., Camps-Valls, G., Stevens, B., Jung, M., Denzler, J., Carvalhais, N., Prabhat, f.: Deep learning and process understanding for data-driven earth system science. Nature 566(7743), 195–204 (2019) Scheinker and Pokharel [2023] Scheinker, A., Pokharel, R.: Physics-constrained 3D convolutional neural networks for electrodynamics. APL Machine Learning 1(2) (2023) Wandel et al. [2021] Wandel, N., Weinmann, M., Klein, R.: Teaching the incompressible navier–stokes equations to fast neural surrogate models in three dimensions. Physics of Fluids 33(4) (2021) Shi et al. [2015] Shi, X., Chen, Z., Wang, H., Yeung, D.-Y., Wong, W.-K., Woo, W.-c.: Convolutional lstm network: A machine learning approach for precipitation nowcasting. Advances in neural information processing systems 28 (2015) Cheng et al. [2020] Cheng, M., Fang, F., Pain, C.C., Navon, I.: Data-driven modelling of nonlinear spatio-temporal fluid flows using a deep convolutional generative adversarial network. 
Computer Methods in Applied Mechanics and Engineering 365, 113000 (2020) Kipf and Welling [2016] Kipf, T.N., Welling, M.: Semi-supervised classification with graph convolutional networks. arXiv preprint arXiv:1609.02907 (2016) Chen et al. [2021] Chen, J., Hachem, E., Viquerat, J.: Graph neural networks for laminar flow prediction around random two-dimensional shapes. Physics of Fluids 33(12) (2021) Scheinker [2021] Scheinker, A.: Adaptive machine learning for time-varying systems: low dimensional latent space tuning. Journal of Instrumentation 16(10), 10008 (2021) Montes de Oca Zapiain et al. [2021] Oca Zapiain, D., Stewart, J.A., Dingreville, R.: Accelerating phase-field-based microstructure evolution predictions via surrogate models trained by machine learning methods. npj Computational Materials 7(1), 3 (2021) Wiewel et al. [2019] Wiewel, S., Becher, M., Thuerey, N.: Latent space physics: Towards learning the temporal evolution of fluid flow. In: Computer Graphics Forum, vol. 38, pp. 71–82 (2019). Wiley Online Library Nakamura et al. [2021] Nakamura, T., Fukami, K., Hasegawa, K., Nabae, Y., Fukagata, K.: Convolutional neural network and long short-term memory based reduced order surrogate for minimal turbulent channel flow. Physics of Fluids 33(2) (2021) Maulik et al. [2021] Maulik, R., Lusch, B., Balaprakash, P.: Reduced-order modeling of advection-dominated systems with recurrent neural networks and convolutional autoencoders. Physics of Fluids 33(3) (2021) Vlachas et al. [2022] Vlachas, P.R., Arampatzis, G., Uhler, C., Koumoutsakos, P.: Multiscale simulations of complex systems by learning their effective dynamics. Nature Machine Intelligence 4(4), 359–366 (2022) Rautela et al. [2022] Rautela, M., Senthilnath, J., Monaco, E., Gopalakrishnan, S.: Delamination prediction in composite panels using unsupervised-feature learning methods with wavelet-enhanced guided wave representations. Composite Structures 291, 115579 (2022) Scheinker et al. [2023] Scheinker, A., Cropp, F., Filippetto, D.: Adaptive autoencoder latent space tuning for more robust machine learning beyond the training set for six-dimensional phase space diagnostics of a time-varying ultrafast electron-diffraction compact accelerator. Physical Review E 107(4), 045302 (2023) Kingma and Welling [2013] Kingma, D.P., Welling, M.: Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114 (2013) Hartmann et al. [2022] Hartmann, G., Goetzke, G., Düsterer, S., Feuer-Forson, P., Lever, F., Meier, D., Möller, F., Vera Ramirez, L., Guehr, M., Tiedtke, K., et al.: Unsupervised real-world knowledge extraction via disentangled variational autoencoders for photon diagnostics. Scientific Reports 12(1), 20783 (2022) Scheinker et al. [2023] Scheinker, A., Filippetto, D., Cropp, F.: 6d phase space diagnostics based on adaptively tuned physics-informed generative convolutional neural networks. In: Journal of Physics: Conference Series, vol. 2420, p. 012068 (2023). IOP Publishing Cathey et al. [2018] Cathey, B., Cousineau, S., Aleksandrov, A., Zhukov, A.: First six dimensional phase space measurement of an accelerator beam. Physical Review Letters 121(6), 064804 (2018) Tenenbaum [2005] Tenenbaum, P.: Lucretia: A matlab-based toolbox for the modelling and simulation of single-pass electron beam transport systems. In: Proceedings of the 2005 Particle Accelerator Conference, pp. 4197–4199 (2005). IEEE Young and Billen [2003] Young, L., Billen, J.: The particle tracking code parmela. In: Proceedings of the Particle Accelerator Conference, vol. 
5, pp. 3521–3523 (2003) Pang and Rybarcyk [2014] Pang, X., Rybarcyk, L.: Gpu accelerated online multi-particle beam dynamics simulator for ion linear particle accelerators. Computer Physics Communications 185(3), 744–753 (2014) Adelmann [2019] Adelmann, A.: On nonintrusive uncertainty quantification and surrogate model construction in particle accelerator modeling. SIAM/ASA Journal on Uncertainty Quantification 7(2), 383–416 (2019) Newton [1970] Newton, R.: Inverse problems in physics. SIAM Review 12(3), 346–356 (1970) Wolski et al. [2022] Wolski, A., Johnson, M.A., King, M., Militsyn, B.L., Williams, P.H.: Transverse phase space tomography in an accelerator test facility using image compression and machine learning. Physical Review Accelerators and Beams 25(12), 122803 (2022) Mayet et al. [2022] Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022) Zhu et al. [2] Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. [2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. 
[2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. 
arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) AlQuraishi, M., Sorger, P.K.: Differentiable biology: using deep learning for biophysics-based and data-driven modeling of molecular mechanisms. Nature methods 18(10), 1169–1180 (2021) Reichstein et al. [2019] Reichstein, M., Camps-Valls, G., Stevens, B., Jung, M., Denzler, J., Carvalhais, N., Prabhat, f.: Deep learning and process understanding for data-driven earth system science. Nature 566(7743), 195–204 (2019) Scheinker and Pokharel [2023] Scheinker, A., Pokharel, R.: Physics-constrained 3D convolutional neural networks for electrodynamics. APL Machine Learning 1(2) (2023) Wandel et al. [2021] Wandel, N., Weinmann, M., Klein, R.: Teaching the incompressible navier–stokes equations to fast neural surrogate models in three dimensions. Physics of Fluids 33(4) (2021) Shi et al. [2015] Shi, X., Chen, Z., Wang, H., Yeung, D.-Y., Wong, W.-K., Woo, W.-c.: Convolutional lstm network: A machine learning approach for precipitation nowcasting. Advances in neural information processing systems 28 (2015) Cheng et al. [2020] Cheng, M., Fang, F., Pain, C.C., Navon, I.: Data-driven modelling of nonlinear spatio-temporal fluid flows using a deep convolutional generative adversarial network. Computer Methods in Applied Mechanics and Engineering 365, 113000 (2020) Kipf and Welling [2016] Kipf, T.N., Welling, M.: Semi-supervised classification with graph convolutional networks. arXiv preprint arXiv:1609.02907 (2016) Chen et al. [2021] Chen, J., Hachem, E., Viquerat, J.: Graph neural networks for laminar flow prediction around random two-dimensional shapes. Physics of Fluids 33(12) (2021) Scheinker [2021] Scheinker, A.: Adaptive machine learning for time-varying systems: low dimensional latent space tuning. 
Journal of Instrumentation 16(10), 10008 (2021) Montes de Oca Zapiain et al. [2021] Oca Zapiain, D., Stewart, J.A., Dingreville, R.: Accelerating phase-field-based microstructure evolution predictions via surrogate models trained by machine learning methods. npj Computational Materials 7(1), 3 (2021) Wiewel et al. [2019] Wiewel, S., Becher, M., Thuerey, N.: Latent space physics: Towards learning the temporal evolution of fluid flow. In: Computer Graphics Forum, vol. 38, pp. 71–82 (2019). Wiley Online Library Nakamura et al. [2021] Nakamura, T., Fukami, K., Hasegawa, K., Nabae, Y., Fukagata, K.: Convolutional neural network and long short-term memory based reduced order surrogate for minimal turbulent channel flow. Physics of Fluids 33(2) (2021) Maulik et al. [2021] Maulik, R., Lusch, B., Balaprakash, P.: Reduced-order modeling of advection-dominated systems with recurrent neural networks and convolutional autoencoders. Physics of Fluids 33(3) (2021) Vlachas et al. [2022] Vlachas, P.R., Arampatzis, G., Uhler, C., Koumoutsakos, P.: Multiscale simulations of complex systems by learning their effective dynamics. Nature Machine Intelligence 4(4), 359–366 (2022) Rautela et al. [2022] Rautela, M., Senthilnath, J., Monaco, E., Gopalakrishnan, S.: Delamination prediction in composite panels using unsupervised-feature learning methods with wavelet-enhanced guided wave representations. Composite Structures 291, 115579 (2022) Scheinker et al. [2023] Scheinker, A., Cropp, F., Filippetto, D.: Adaptive autoencoder latent space tuning for more robust machine learning beyond the training set for six-dimensional phase space diagnostics of a time-varying ultrafast electron-diffraction compact accelerator. Physical Review E 107(4), 045302 (2023) Kingma and Welling [2013] Kingma, D.P., Welling, M.: Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114 (2013) Hartmann et al. [2022] Hartmann, G., Goetzke, G., Düsterer, S., Feuer-Forson, P., Lever, F., Meier, D., Möller, F., Vera Ramirez, L., Guehr, M., Tiedtke, K., et al.: Unsupervised real-world knowledge extraction via disentangled variational autoencoders for photon diagnostics. Scientific Reports 12(1), 20783 (2022) Scheinker et al. [2023] Scheinker, A., Filippetto, D., Cropp, F.: 6d phase space diagnostics based on adaptively tuned physics-informed generative convolutional neural networks. In: Journal of Physics: Conference Series, vol. 2420, p. 012068 (2023). IOP Publishing Cathey et al. [2018] Cathey, B., Cousineau, S., Aleksandrov, A., Zhukov, A.: First six dimensional phase space measurement of an accelerator beam. Physical Review Letters 121(6), 064804 (2018) Tenenbaum [2005] Tenenbaum, P.: Lucretia: A matlab-based toolbox for the modelling and simulation of single-pass electron beam transport systems. In: Proceedings of the 2005 Particle Accelerator Conference, pp. 4197–4199 (2005). IEEE Young and Billen [2003] Young, L., Billen, J.: The particle tracking code parmela. In: Proceedings of the Particle Accelerator Conference, vol. 5, pp. 3521–3523 (2003) Pang and Rybarcyk [2014] Pang, X., Rybarcyk, L.: Gpu accelerated online multi-particle beam dynamics simulator for ion linear particle accelerators. Computer Physics Communications 185(3), 744–753 (2014) Adelmann [2019] Adelmann, A.: On nonintrusive uncertainty quantification and surrogate model construction in particle accelerator modeling. SIAM/ASA Journal on Uncertainty Quantification 7(2), 383–416 (2019) Newton [1970] Newton, R.: Inverse problems in physics. 
SIAM Review 12(3), 346–356 (1970) Wolski et al. [2022] Wolski, A., Johnson, M.A., King, M., Militsyn, B.L., Williams, P.H.: Transverse phase space tomography in an accelerator test facility using image compression and machine learning. Physical Review Accelerators and Beams 25(12), 122803 (2022) Mayet et al. [2022] Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022) Zhu et al. [2] Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. [2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. 
[2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. 
[2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Reichstein, M., Camps-Valls, G., Stevens, B., Jung, M., Denzler, J., Carvalhais, N., Prabhat, f.: Deep learning and process understanding for data-driven earth system science. Nature 566(7743), 195–204 (2019) Scheinker and Pokharel [2023] Scheinker, A., Pokharel, R.: Physics-constrained 3D convolutional neural networks for electrodynamics. APL Machine Learning 1(2) (2023) Wandel et al. [2021] Wandel, N., Weinmann, M., Klein, R.: Teaching the incompressible navier–stokes equations to fast neural surrogate models in three dimensions. Physics of Fluids 33(4) (2021) Shi et al. [2015] Shi, X., Chen, Z., Wang, H., Yeung, D.-Y., Wong, W.-K., Woo, W.-c.: Convolutional lstm network: A machine learning approach for precipitation nowcasting. Advances in neural information processing systems 28 (2015) Cheng et al. [2020] Cheng, M., Fang, F., Pain, C.C., Navon, I.: Data-driven modelling of nonlinear spatio-temporal fluid flows using a deep convolutional generative adversarial network. Computer Methods in Applied Mechanics and Engineering 365, 113000 (2020) Kipf and Welling [2016] Kipf, T.N., Welling, M.: Semi-supervised classification with graph convolutional networks. arXiv preprint arXiv:1609.02907 (2016) Chen et al. [2021] Chen, J., Hachem, E., Viquerat, J.: Graph neural networks for laminar flow prediction around random two-dimensional shapes. Physics of Fluids 33(12) (2021) Scheinker [2021] Scheinker, A.: Adaptive machine learning for time-varying systems: low dimensional latent space tuning. Journal of Instrumentation 16(10), 10008 (2021) Montes de Oca Zapiain et al. [2021] Oca Zapiain, D., Stewart, J.A., Dingreville, R.: Accelerating phase-field-based microstructure evolution predictions via surrogate models trained by machine learning methods. npj Computational Materials 7(1), 3 (2021) Wiewel et al. [2019] Wiewel, S., Becher, M., Thuerey, N.: Latent space physics: Towards learning the temporal evolution of fluid flow. In: Computer Graphics Forum, vol. 38, pp. 71–82 (2019). Wiley Online Library Nakamura et al. 
[2021] Nakamura, T., Fukami, K., Hasegawa, K., Nabae, Y., Fukagata, K.: Convolutional neural network and long short-term memory based reduced order surrogate for minimal turbulent channel flow. Physics of Fluids 33(2) (2021) Maulik et al. [2021] Maulik, R., Lusch, B., Balaprakash, P.: Reduced-order modeling of advection-dominated systems with recurrent neural networks and convolutional autoencoders. Physics of Fluids 33(3) (2021) Vlachas et al. [2022] Vlachas, P.R., Arampatzis, G., Uhler, C., Koumoutsakos, P.: Multiscale simulations of complex systems by learning their effective dynamics. Nature Machine Intelligence 4(4), 359–366 (2022) Rautela et al. [2022] Rautela, M., Senthilnath, J., Monaco, E., Gopalakrishnan, S.: Delamination prediction in composite panels using unsupervised-feature learning methods with wavelet-enhanced guided wave representations. Composite Structures 291, 115579 (2022) Scheinker et al. [2023] Scheinker, A., Cropp, F., Filippetto, D.: Adaptive autoencoder latent space tuning for more robust machine learning beyond the training set for six-dimensional phase space diagnostics of a time-varying ultrafast electron-diffraction compact accelerator. Physical Review E 107(4), 045302 (2023) Kingma and Welling [2013] Kingma, D.P., Welling, M.: Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114 (2013) Hartmann et al. [2022] Hartmann, G., Goetzke, G., Düsterer, S., Feuer-Forson, P., Lever, F., Meier, D., Möller, F., Vera Ramirez, L., Guehr, M., Tiedtke, K., et al.: Unsupervised real-world knowledge extraction via disentangled variational autoencoders for photon diagnostics. Scientific Reports 12(1), 20783 (2022) Scheinker et al. [2023] Scheinker, A., Filippetto, D., Cropp, F.: 6d phase space diagnostics based on adaptively tuned physics-informed generative convolutional neural networks. In: Journal of Physics: Conference Series, vol. 2420, p. 012068 (2023). IOP Publishing Cathey et al. [2018] Cathey, B., Cousineau, S., Aleksandrov, A., Zhukov, A.: First six dimensional phase space measurement of an accelerator beam. Physical Review Letters 121(6), 064804 (2018) Tenenbaum [2005] Tenenbaum, P.: Lucretia: A matlab-based toolbox for the modelling and simulation of single-pass electron beam transport systems. In: Proceedings of the 2005 Particle Accelerator Conference, pp. 4197–4199 (2005). IEEE Young and Billen [2003] Young, L., Billen, J.: The particle tracking code parmela. In: Proceedings of the Particle Accelerator Conference, vol. 5, pp. 3521–3523 (2003) Pang and Rybarcyk [2014] Pang, X., Rybarcyk, L.: Gpu accelerated online multi-particle beam dynamics simulator for ion linear particle accelerators. Computer Physics Communications 185(3), 744–753 (2014) Adelmann [2019] Adelmann, A.: On nonintrusive uncertainty quantification and surrogate model construction in particle accelerator modeling. SIAM/ASA Journal on Uncertainty Quantification 7(2), 383–416 (2019) Newton [1970] Newton, R.: Inverse problems in physics. SIAM Review 12(3), 346–356 (1970) Wolski et al. [2022] Wolski, A., Johnson, M.A., King, M., Militsyn, B.L., Williams, P.H.: Transverse phase space tomography in an accelerator test facility using image compression and machine learning. Physical Review Accelerators and Beams 25(12), 122803 (2022) Mayet et al. 
[2022] Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022) Zhu et al. [2] Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. [2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. 
The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. 
[2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Scheinker, A., Pokharel, R.: Physics-constrained 3D convolutional neural networks for electrodynamics. APL Machine Learning 1(2) (2023) Wandel et al. [2021] Wandel, N., Weinmann, M., Klein, R.: Teaching the incompressible navier–stokes equations to fast neural surrogate models in three dimensions. Physics of Fluids 33(4) (2021) Shi et al. [2015] Shi, X., Chen, Z., Wang, H., Yeung, D.-Y., Wong, W.-K., Woo, W.-c.: Convolutional lstm network: A machine learning approach for precipitation nowcasting. Advances in neural information processing systems 28 (2015) Cheng et al. [2020] Cheng, M., Fang, F., Pain, C.C., Navon, I.: Data-driven modelling of nonlinear spatio-temporal fluid flows using a deep convolutional generative adversarial network. Computer Methods in Applied Mechanics and Engineering 365, 113000 (2020) Kipf and Welling [2016] Kipf, T.N., Welling, M.: Semi-supervised classification with graph convolutional networks. arXiv preprint arXiv:1609.02907 (2016) Chen et al. [2021] Chen, J., Hachem, E., Viquerat, J.: Graph neural networks for laminar flow prediction around random two-dimensional shapes. Physics of Fluids 33(12) (2021) Scheinker [2021] Scheinker, A.: Adaptive machine learning for time-varying systems: low dimensional latent space tuning. Journal of Instrumentation 16(10), 10008 (2021) Montes de Oca Zapiain et al. [2021] Oca Zapiain, D., Stewart, J.A., Dingreville, R.: Accelerating phase-field-based microstructure evolution predictions via surrogate models trained by machine learning methods. npj Computational Materials 7(1), 3 (2021) Wiewel et al. [2019] Wiewel, S., Becher, M., Thuerey, N.: Latent space physics: Towards learning the temporal evolution of fluid flow. In: Computer Graphics Forum, vol. 38, pp. 71–82 (2019). Wiley Online Library Nakamura et al. [2021] Nakamura, T., Fukami, K., Hasegawa, K., Nabae, Y., Fukagata, K.: Convolutional neural network and long short-term memory based reduced order surrogate for minimal turbulent channel flow. Physics of Fluids 33(2) (2021) Maulik et al. [2021] Maulik, R., Lusch, B., Balaprakash, P.: Reduced-order modeling of advection-dominated systems with recurrent neural networks and convolutional autoencoders. Physics of Fluids 33(3) (2021) Vlachas et al. [2022] Vlachas, P.R., Arampatzis, G., Uhler, C., Koumoutsakos, P.: Multiscale simulations of complex systems by learning their effective dynamics. Nature Machine Intelligence 4(4), 359–366 (2022) Rautela et al. 
[2022] Rautela, M., Senthilnath, J., Monaco, E., Gopalakrishnan, S.: Delamination prediction in composite panels using unsupervised-feature learning methods with wavelet-enhanced guided wave representations. Composite Structures 291, 115579 (2022) Scheinker et al. [2023] Scheinker, A., Cropp, F., Filippetto, D.: Adaptive autoencoder latent space tuning for more robust machine learning beyond the training set for six-dimensional phase space diagnostics of a time-varying ultrafast electron-diffraction compact accelerator. Physical Review E 107(4), 045302 (2023) Kingma and Welling [2013] Kingma, D.P., Welling, M.: Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114 (2013) Hartmann et al. [2022] Hartmann, G., Goetzke, G., Düsterer, S., Feuer-Forson, P., Lever, F., Meier, D., Möller, F., Vera Ramirez, L., Guehr, M., Tiedtke, K., et al.: Unsupervised real-world knowledge extraction via disentangled variational autoencoders for photon diagnostics. Scientific Reports 12(1), 20783 (2022) Scheinker et al. [2023] Scheinker, A., Filippetto, D., Cropp, F.: 6d phase space diagnostics based on adaptively tuned physics-informed generative convolutional neural networks. In: Journal of Physics: Conference Series, vol. 2420, p. 012068 (2023). IOP Publishing Cathey et al. [2018] Cathey, B., Cousineau, S., Aleksandrov, A., Zhukov, A.: First six dimensional phase space measurement of an accelerator beam. Physical Review Letters 121(6), 064804 (2018) Tenenbaum [2005] Tenenbaum, P.: Lucretia: A matlab-based toolbox for the modelling and simulation of single-pass electron beam transport systems. In: Proceedings of the 2005 Particle Accelerator Conference, pp. 4197–4199 (2005). IEEE Young and Billen [2003] Young, L., Billen, J.: The particle tracking code parmela. In: Proceedings of the Particle Accelerator Conference, vol. 5, pp. 3521–3523 (2003) Pang and Rybarcyk [2014] Pang, X., Rybarcyk, L.: Gpu accelerated online multi-particle beam dynamics simulator for ion linear particle accelerators. Computer Physics Communications 185(3), 744–753 (2014) Adelmann [2019] Adelmann, A.: On nonintrusive uncertainty quantification and surrogate model construction in particle accelerator modeling. SIAM/ASA Journal on Uncertainty Quantification 7(2), 383–416 (2019) Newton [1970] Newton, R.: Inverse problems in physics. SIAM Review 12(3), 346–356 (1970) Wolski et al. [2022] Wolski, A., Johnson, M.A., King, M., Militsyn, B.L., Williams, P.H.: Transverse phase space tomography in an accelerator test facility using image compression and machine learning. Physical Review Accelerators and Beams 25(12), 122803 (2022) Mayet et al. [2022] Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022) Zhu et al. [2] Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. 
[2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. 
Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021)
Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015)
Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021)
Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021)
Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018)
Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020)
Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014)
Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004)
Van der Maaten and Hinton [2008] Van der Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008)
McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018)
Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022)
Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023)
Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017)
Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018)
Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023)
Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE
He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016)
Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018)
[2021] Oca Zapiain, D., Stewart, J.A., Dingreville, R.: Accelerating phase-field-based microstructure evolution predictions via surrogate models trained by machine learning methods. npj Computational Materials 7(1), 3 (2021) Wiewel et al. [2019] Wiewel, S., Becher, M., Thuerey, N.: Latent space physics: Towards learning the temporal evolution of fluid flow. In: Computer Graphics Forum, vol. 38, pp. 71–82 (2019). Wiley Online Library Nakamura et al. [2021] Nakamura, T., Fukami, K., Hasegawa, K., Nabae, Y., Fukagata, K.: Convolutional neural network and long short-term memory based reduced order surrogate for minimal turbulent channel flow. Physics of Fluids 33(2) (2021) Maulik et al. [2021] Maulik, R., Lusch, B., Balaprakash, P.: Reduced-order modeling of advection-dominated systems with recurrent neural networks and convolutional autoencoders. Physics of Fluids 33(3) (2021) Vlachas et al. [2022] Vlachas, P.R., Arampatzis, G., Uhler, C., Koumoutsakos, P.: Multiscale simulations of complex systems by learning their effective dynamics. Nature Machine Intelligence 4(4), 359–366 (2022) Rautela et al. [2022] Rautela, M., Senthilnath, J., Monaco, E., Gopalakrishnan, S.: Delamination prediction in composite panels using unsupervised-feature learning methods with wavelet-enhanced guided wave representations. Composite Structures 291, 115579 (2022) Scheinker et al. [2023] Scheinker, A., Cropp, F., Filippetto, D.: Adaptive autoencoder latent space tuning for more robust machine learning beyond the training set for six-dimensional phase space diagnostics of a time-varying ultrafast electron-diffraction compact accelerator. Physical Review E 107(4), 045302 (2023) Kingma and Welling [2013] Kingma, D.P., Welling, M.: Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114 (2013) Hartmann et al. [2022] Hartmann, G., Goetzke, G., Düsterer, S., Feuer-Forson, P., Lever, F., Meier, D., Möller, F., Vera Ramirez, L., Guehr, M., Tiedtke, K., et al.: Unsupervised real-world knowledge extraction via disentangled variational autoencoders for photon diagnostics. Scientific Reports 12(1), 20783 (2022) Scheinker et al. [2023] Scheinker, A., Filippetto, D., Cropp, F.: 6d phase space diagnostics based on adaptively tuned physics-informed generative convolutional neural networks. In: Journal of Physics: Conference Series, vol. 2420, p. 012068 (2023). IOP Publishing Cathey et al. [2018] Cathey, B., Cousineau, S., Aleksandrov, A., Zhukov, A.: First six dimensional phase space measurement of an accelerator beam. Physical Review Letters 121(6), 064804 (2018) Tenenbaum [2005] Tenenbaum, P.: Lucretia: A matlab-based toolbox for the modelling and simulation of single-pass electron beam transport systems. In: Proceedings of the 2005 Particle Accelerator Conference, pp. 4197–4199 (2005). IEEE Young and Billen [2003] Young, L., Billen, J.: The particle tracking code parmela. In: Proceedings of the Particle Accelerator Conference, vol. 5, pp. 3521–3523 (2003) Pang and Rybarcyk [2014] Pang, X., Rybarcyk, L.: Gpu accelerated online multi-particle beam dynamics simulator for ion linear particle accelerators. Computer Physics Communications 185(3), 744–753 (2014) Adelmann [2019] Adelmann, A.: On nonintrusive uncertainty quantification and surrogate model construction in particle accelerator modeling. SIAM/ASA Journal on Uncertainty Quantification 7(2), 383–416 (2019) Newton [1970] Newton, R.: Inverse problems in physics. SIAM Review 12(3), 346–356 (1970) Wolski et al. 
[2022] Wolski, A., Johnson, M.A., King, M., Militsyn, B.L., Williams, P.H.: Transverse phase space tomography in an accelerator test facility using image compression and machine learning. Physical Review Accelerators and Beams 25(12), 122803 (2022) Mayet et al. [2022] Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022) Zhu et al. [2] Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. [2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. 
In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. 
[2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Oca Zapiain, D., Stewart, J.A., Dingreville, R.: Accelerating phase-field-based microstructure evolution predictions via surrogate models trained by machine learning methods. npj Computational Materials 7(1), 3 (2021) Wiewel et al. [2019] Wiewel, S., Becher, M., Thuerey, N.: Latent space physics: Towards learning the temporal evolution of fluid flow. In: Computer Graphics Forum, vol. 38, pp. 71–82 (2019). Wiley Online Library Nakamura et al. [2021] Nakamura, T., Fukami, K., Hasegawa, K., Nabae, Y., Fukagata, K.: Convolutional neural network and long short-term memory based reduced order surrogate for minimal turbulent channel flow. Physics of Fluids 33(2) (2021) Maulik et al. [2021] Maulik, R., Lusch, B., Balaprakash, P.: Reduced-order modeling of advection-dominated systems with recurrent neural networks and convolutional autoencoders. Physics of Fluids 33(3) (2021) Vlachas et al. [2022] Vlachas, P.R., Arampatzis, G., Uhler, C., Koumoutsakos, P.: Multiscale simulations of complex systems by learning their effective dynamics. Nature Machine Intelligence 4(4), 359–366 (2022) Rautela et al. [2022] Rautela, M., Senthilnath, J., Monaco, E., Gopalakrishnan, S.: Delamination prediction in composite panels using unsupervised-feature learning methods with wavelet-enhanced guided wave representations. Composite Structures 291, 115579 (2022) Scheinker et al. [2023] Scheinker, A., Cropp, F., Filippetto, D.: Adaptive autoencoder latent space tuning for more robust machine learning beyond the training set for six-dimensional phase space diagnostics of a time-varying ultrafast electron-diffraction compact accelerator. Physical Review E 107(4), 045302 (2023) Kingma and Welling [2013] Kingma, D.P., Welling, M.: Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114 (2013) Hartmann et al. [2022] Hartmann, G., Goetzke, G., Düsterer, S., Feuer-Forson, P., Lever, F., Meier, D., Möller, F., Vera Ramirez, L., Guehr, M., Tiedtke, K., et al.: Unsupervised real-world knowledge extraction via disentangled variational autoencoders for photon diagnostics. Scientific Reports 12(1), 20783 (2022) Scheinker et al. [2023] Scheinker, A., Filippetto, D., Cropp, F.: 6d phase space diagnostics based on adaptively tuned physics-informed generative convolutional neural networks. In: Journal of Physics: Conference Series, vol. 2420, p. 012068 (2023). IOP Publishing Cathey et al. 
[2018] Cathey, B., Cousineau, S., Aleksandrov, A., Zhukov, A.: First six dimensional phase space measurement of an accelerator beam. Physical Review Letters 121(6), 064804 (2018) Tenenbaum [2005] Tenenbaum, P.: Lucretia: A matlab-based toolbox for the modelling and simulation of single-pass electron beam transport systems. In: Proceedings of the 2005 Particle Accelerator Conference, pp. 4197–4199 (2005). IEEE Young and Billen [2003] Young, L., Billen, J.: The particle tracking code parmela. In: Proceedings of the Particle Accelerator Conference, vol. 5, pp. 3521–3523 (2003) Pang and Rybarcyk [2014] Pang, X., Rybarcyk, L.: Gpu accelerated online multi-particle beam dynamics simulator for ion linear particle accelerators. Computer Physics Communications 185(3), 744–753 (2014) Adelmann [2019] Adelmann, A.: On nonintrusive uncertainty quantification and surrogate model construction in particle accelerator modeling. SIAM/ASA Journal on Uncertainty Quantification 7(2), 383–416 (2019) Newton [1970] Newton, R.: Inverse problems in physics. SIAM Review 12(3), 346–356 (1970) Wolski et al. [2022] Wolski, A., Johnson, M.A., King, M., Militsyn, B.L., Williams, P.H.: Transverse phase space tomography in an accelerator test facility using image compression and machine learning. Physical Review Accelerators and Beams 25(12), 122803 (2022) Mayet et al. [2022] Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022) Zhu et al. [2] Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. [2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. 
[2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. 
[2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Wiewel, S., Becher, M., Thuerey, N.: Latent space physics: Towards learning the temporal evolution of fluid flow. In: Computer Graphics Forum, vol. 38, pp. 71–82 (2019). Wiley Online Library Nakamura et al. [2021] Nakamura, T., Fukami, K., Hasegawa, K., Nabae, Y., Fukagata, K.: Convolutional neural network and long short-term memory based reduced order surrogate for minimal turbulent channel flow. Physics of Fluids 33(2) (2021) Maulik et al. [2021] Maulik, R., Lusch, B., Balaprakash, P.: Reduced-order modeling of advection-dominated systems with recurrent neural networks and convolutional autoencoders. Physics of Fluids 33(3) (2021) Vlachas et al. [2022] Vlachas, P.R., Arampatzis, G., Uhler, C., Koumoutsakos, P.: Multiscale simulations of complex systems by learning their effective dynamics. Nature Machine Intelligence 4(4), 359–366 (2022) Rautela et al. [2022] Rautela, M., Senthilnath, J., Monaco, E., Gopalakrishnan, S.: Delamination prediction in composite panels using unsupervised-feature learning methods with wavelet-enhanced guided wave representations. Composite Structures 291, 115579 (2022) Scheinker et al. 
[2023] Scheinker, A., Cropp, F., Filippetto, D.: Adaptive autoencoder latent space tuning for more robust machine learning beyond the training set for six-dimensional phase space diagnostics of a time-varying ultrafast electron-diffraction compact accelerator. Physical Review E 107(4), 045302 (2023) Kingma and Welling [2013] Kingma, D.P., Welling, M.: Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114 (2013) Hartmann et al. [2022] Hartmann, G., Goetzke, G., Düsterer, S., Feuer-Forson, P., Lever, F., Meier, D., Möller, F., Vera Ramirez, L., Guehr, M., Tiedtke, K., et al.: Unsupervised real-world knowledge extraction via disentangled variational autoencoders for photon diagnostics. Scientific Reports 12(1), 20783 (2022) Scheinker et al. [2023] Scheinker, A., Filippetto, D., Cropp, F.: 6d phase space diagnostics based on adaptively tuned physics-informed generative convolutional neural networks. In: Journal of Physics: Conference Series, vol. 2420, p. 012068 (2023). IOP Publishing Cathey et al. [2018] Cathey, B., Cousineau, S., Aleksandrov, A., Zhukov, A.: First six dimensional phase space measurement of an accelerator beam. Physical Review Letters 121(6), 064804 (2018) Tenenbaum [2005] Tenenbaum, P.: Lucretia: A matlab-based toolbox for the modelling and simulation of single-pass electron beam transport systems. In: Proceedings of the 2005 Particle Accelerator Conference, pp. 4197–4199 (2005). IEEE Young and Billen [2003] Young, L., Billen, J.: The particle tracking code parmela. In: Proceedings of the Particle Accelerator Conference, vol. 5, pp. 3521–3523 (2003) Pang and Rybarcyk [2014] Pang, X., Rybarcyk, L.: Gpu accelerated online multi-particle beam dynamics simulator for ion linear particle accelerators. Computer Physics Communications 185(3), 744–753 (2014) Adelmann [2019] Adelmann, A.: On nonintrusive uncertainty quantification and surrogate model construction in particle accelerator modeling. SIAM/ASA Journal on Uncertainty Quantification 7(2), 383–416 (2019) Newton [1970] Newton, R.: Inverse problems in physics. SIAM Review 12(3), 346–356 (1970) Wolski et al. [2022] Wolski, A., Johnson, M.A., King, M., Militsyn, B.L., Williams, P.H.: Transverse phase space tomography in an accelerator test facility using image compression and machine learning. Physical Review Accelerators and Beams 25(12), 122803 (2022) Mayet et al. [2022] Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022) Zhu et al. [2] Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. [2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. 
[2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. 
Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Nakamura, T., Fukami, K., Hasegawa, K., Nabae, Y., Fukagata, K.: Convolutional neural network and long short-term memory based reduced order surrogate for minimal turbulent channel flow. Physics of Fluids 33(2) (2021) Maulik et al. 
[2021] Maulik, R., Lusch, B., Balaprakash, P.: Reduced-order modeling of advection-dominated systems with recurrent neural networks and convolutional autoencoders. Physics of Fluids 33(3) (2021) Vlachas et al. [2022] Vlachas, P.R., Arampatzis, G., Uhler, C., Koumoutsakos, P.: Multiscale simulations of complex systems by learning their effective dynamics. Nature Machine Intelligence 4(4), 359–366 (2022) Rautela et al. [2022] Rautela, M., Senthilnath, J., Monaco, E., Gopalakrishnan, S.: Delamination prediction in composite panels using unsupervised-feature learning methods with wavelet-enhanced guided wave representations. Composite Structures 291, 115579 (2022) Scheinker et al. [2023] Scheinker, A., Cropp, F., Filippetto, D.: Adaptive autoencoder latent space tuning for more robust machine learning beyond the training set for six-dimensional phase space diagnostics of a time-varying ultrafast electron-diffraction compact accelerator. Physical Review E 107(4), 045302 (2023) Kingma and Welling [2013] Kingma, D.P., Welling, M.: Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114 (2013) Hartmann et al. [2022] Hartmann, G., Goetzke, G., Düsterer, S., Feuer-Forson, P., Lever, F., Meier, D., Möller, F., Vera Ramirez, L., Guehr, M., Tiedtke, K., et al.: Unsupervised real-world knowledge extraction via disentangled variational autoencoders for photon diagnostics. Scientific Reports 12(1), 20783 (2022) Scheinker et al. [2023] Scheinker, A., Filippetto, D., Cropp, F.: 6d phase space diagnostics based on adaptively tuned physics-informed generative convolutional neural networks. In: Journal of Physics: Conference Series, vol. 2420, p. 012068 (2023). IOP Publishing Cathey et al. [2018] Cathey, B., Cousineau, S., Aleksandrov, A., Zhukov, A.: First six dimensional phase space measurement of an accelerator beam. Physical Review Letters 121(6), 064804 (2018) Tenenbaum [2005] Tenenbaum, P.: Lucretia: A matlab-based toolbox for the modelling and simulation of single-pass electron beam transport systems. In: Proceedings of the 2005 Particle Accelerator Conference, pp. 4197–4199 (2005). IEEE Young and Billen [2003] Young, L., Billen, J.: The particle tracking code parmela. In: Proceedings of the Particle Accelerator Conference, vol. 5, pp. 3521–3523 (2003) Pang and Rybarcyk [2014] Pang, X., Rybarcyk, L.: Gpu accelerated online multi-particle beam dynamics simulator for ion linear particle accelerators. Computer Physics Communications 185(3), 744–753 (2014) Adelmann [2019] Adelmann, A.: On nonintrusive uncertainty quantification and surrogate model construction in particle accelerator modeling. SIAM/ASA Journal on Uncertainty Quantification 7(2), 383–416 (2019) Newton [1970] Newton, R.: Inverse problems in physics. SIAM Review 12(3), 346–356 (1970) Wolski et al. [2022] Wolski, A., Johnson, M.A., King, M., Militsyn, B.L., Williams, P.H.: Transverse phase space tomography in an accelerator test facility using image compression and machine learning. Physical Review Accelerators and Beams 25(12), 122803 (2022) Mayet et al. [2022] Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022) Zhu et al. 
[2] Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. [2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. 
[2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. 
Computer Physics Communications 185(3), 744–753 (2014) Adelmann [2019] Adelmann, A.: On nonintrusive uncertainty quantification and surrogate model construction in particle accelerator modeling. SIAM/ASA Journal on Uncertainty Quantification 7(2), 383–416 (2019) Newton [1970] Newton, R.: Inverse problems in physics. SIAM Review 12(3), 346–356 (1970) Wolski et al. [2022] Wolski, A., Johnson, M.A., King, M., Militsyn, B.L., Williams, P.H.: Transverse phase space tomography in an accelerator test facility using image compression and machine learning. Physical Review Accelerators and Beams 25(12), 122803 (2022) Mayet et al. [2022] Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022) Zhu et al. [2] Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. [2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. 
Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. 
IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Tenenbaum, P.: Lucretia: A matlab-based toolbox for the modelling and simulation of single-pass electron beam transport systems. In: Proceedings of the 2005 Particle Accelerator Conference, pp. 4197–4199 (2005). IEEE Young and Billen [2003] Young, L., Billen, J.: The particle tracking code parmela. In: Proceedings of the Particle Accelerator Conference, vol. 5, pp. 3521–3523 (2003) Pang and Rybarcyk [2014] Pang, X., Rybarcyk, L.: Gpu accelerated online multi-particle beam dynamics simulator for ion linear particle accelerators. Computer Physics Communications 185(3), 744–753 (2014) Adelmann [2019] Adelmann, A.: On nonintrusive uncertainty quantification and surrogate model construction in particle accelerator modeling. SIAM/ASA Journal on Uncertainty Quantification 7(2), 383–416 (2019) Newton [1970] Newton, R.: Inverse problems in physics. SIAM Review 12(3), 346–356 (1970) Wolski et al. [2022] Wolski, A., Johnson, M.A., King, M., Militsyn, B.L., Williams, P.H.: Transverse phase space tomography in an accelerator test facility using image compression and machine learning. Physical Review Accelerators and Beams 25(12), 122803 (2022) Mayet et al. [2022] Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022) Zhu et al. [2] Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. 
Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. [2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. 
[2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. 
[2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Young, L., Billen, J.: The particle tracking code parmela. In: Proceedings of the Particle Accelerator Conference, vol. 5, pp. 3521–3523 (2003) Pang and Rybarcyk [2014] Pang, X., Rybarcyk, L.: Gpu accelerated online multi-particle beam dynamics simulator for ion linear particle accelerators. Computer Physics Communications 185(3), 744–753 (2014) Adelmann [2019] Adelmann, A.: On nonintrusive uncertainty quantification and surrogate model construction in particle accelerator modeling. SIAM/ASA Journal on Uncertainty Quantification 7(2), 383–416 (2019) Newton [1970] Newton, R.: Inverse problems in physics. SIAM Review 12(3), 346–356 (1970) Wolski et al. [2022] Wolski, A., Johnson, M.A., King, M., Militsyn, B.L., Williams, P.H.: Transverse phase space tomography in an accelerator test facility using image compression and machine learning. Physical Review Accelerators and Beams 25(12), 122803 (2022) Mayet et al. [2022] Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022) Zhu et al. [2] Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. [2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. 
[2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. 
Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Pang, X., Rybarcyk, L.: Gpu accelerated online multi-particle beam dynamics simulator for ion linear particle accelerators. Computer Physics Communications 185(3), 744–753 (2014) Adelmann [2019] Adelmann, A.: On nonintrusive uncertainty quantification and surrogate model construction in particle accelerator modeling. SIAM/ASA Journal on Uncertainty Quantification 7(2), 383–416 (2019) Newton [1970] Newton, R.: Inverse problems in physics. SIAM Review 12(3), 346–356 (1970) Wolski et al. [2022] Wolski, A., Johnson, M.A., King, M., Militsyn, B.L., Williams, P.H.: Transverse phase space tomography in an accelerator test facility using image compression and machine learning. Physical Review Accelerators and Beams 25(12), 122803 (2022) Mayet et al. [2022] Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022) Zhu et al. [2] Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. 
[2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. 
[2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. 
[2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Adelmann, A.: On nonintrusive uncertainty quantification and surrogate model construction in particle accelerator modeling. SIAM/ASA Journal on Uncertainty Quantification 7(2), 383–416 (2019) Newton [1970] Newton, R.: Inverse problems in physics. SIAM Review 12(3), 346–356 (1970) Wolski et al. [2022] Wolski, A., Johnson, M.A., King, M., Militsyn, B.L., Williams, P.H.: Transverse phase space tomography in an accelerator test facility using image compression and machine learning. Physical Review Accelerators and Beams 25(12), 122803 (2022) Mayet et al. [2022] Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022) Zhu et al. [2] Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. [2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. 
Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. 
IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Newton, R.: Inverse problems in physics. SIAM Review 12(3), 346–356 (1970) Wolski et al. [2022] Wolski, A., Johnson, M.A., King, M., Militsyn, B.L., Williams, P.H.: Transverse phase space tomography in an accelerator test facility using image compression and machine learning. Physical Review Accelerators and Beams 25(12), 122803 (2022) Mayet et al. [2022] Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022) Zhu et al. [2] Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. [2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. 
[2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. 
[2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Wolski, A., Johnson, M.A., King, M., Militsyn, B.L., Williams, P.H.: Transverse phase space tomography in an accelerator test facility using image compression and machine learning. Physical Review Accelerators and Beams 25(12), 122803 (2022) Mayet et al. [2022] Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022) Zhu et al. 
[2] Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. [2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. 
[2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. 
Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. 
IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. 
[2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. 
Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. 
[2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. 
Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. 
[2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. 
[2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. 
[2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. 
Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. 
Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. 
IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. 
In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. 
In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. 
[2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. 
[2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. 
[2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. 
arXiv preprint arXiv:1807.07543 (2018) Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. 
Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. 
[2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. 
[2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. 
[2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. 
arXiv preprint arXiv:1807.07543 (2018) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. 
In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018)
  3. Vinuesa, R., Brunton, S.L.: Enhancing computational fluid dynamics with machine learning. Nature Computational Science 2(6), 358–366 (2022) Rautela et al. [2022] Rautela, M., Huber, A., Senthilnath, J., Gopalakrishnan, S.: Inverse characterization of composites using guided waves and convolutional neural networks with dual-branch feature fusion. Mechanics of Advanced Materials and Structures 29(27), 6595–6611 (2022) Huerta et al. [2021] Huerta, E., Khan, A., Huang, X., Tian, M., Levental, M., Chard, R., Wei, W., Heflin, M., Katz, D.S., Kindratenko, V., et al.: Accelerated, scalable and reproducible ai-driven gravitational wave detection. Nature Astronomy 5(10), 1062–1068 (2021) Boehnlein et al. [2022] Boehnlein, A., Diefenthaler, M., Sato, N., Schram, M., Ziegler, V., Fanelli, C., Hjorth-Jensen, M., Horn, T., Kuchera, M.P., Lee, D., et al.: Colloquium: Machine learning in nuclear physics. Reviews of Modern Physics 94(3), 031003 (2022) Gonoskov et al. [2019] Gonoskov, A., Wallin, E., Polovinkin, A., Meyerov, I.: Employing machine learning for theory validation and identification of experimental conditions in laser-plasma physics. Scientific reports 9(1), 7043 (2019) AlQuraishi and Sorger [2021] AlQuraishi, M., Sorger, P.K.: Differentiable biology: using deep learning for biophysics-based and data-driven modeling of molecular mechanisms. Nature methods 18(10), 1169–1180 (2021) Reichstein et al. [2019] Reichstein, M., Camps-Valls, G., Stevens, B., Jung, M., Denzler, J., Carvalhais, N., Prabhat, f.: Deep learning and process understanding for data-driven earth system science. Nature 566(7743), 195–204 (2019) Scheinker and Pokharel [2023] Scheinker, A., Pokharel, R.: Physics-constrained 3D convolutional neural networks for electrodynamics. APL Machine Learning 1(2) (2023) Wandel et al. [2021] Wandel, N., Weinmann, M., Klein, R.: Teaching the incompressible navier–stokes equations to fast neural surrogate models in three dimensions. Physics of Fluids 33(4) (2021) Shi et al. [2015] Shi, X., Chen, Z., Wang, H., Yeung, D.-Y., Wong, W.-K., Woo, W.-c.: Convolutional lstm network: A machine learning approach for precipitation nowcasting. Advances in neural information processing systems 28 (2015) Cheng et al. [2020] Cheng, M., Fang, F., Pain, C.C., Navon, I.: Data-driven modelling of nonlinear spatio-temporal fluid flows using a deep convolutional generative adversarial network. Computer Methods in Applied Mechanics and Engineering 365, 113000 (2020) Kipf and Welling [2016] Kipf, T.N., Welling, M.: Semi-supervised classification with graph convolutional networks. arXiv preprint arXiv:1609.02907 (2016) Chen et al. [2021] Chen, J., Hachem, E., Viquerat, J.: Graph neural networks for laminar flow prediction around random two-dimensional shapes. Physics of Fluids 33(12) (2021) Scheinker [2021] Scheinker, A.: Adaptive machine learning for time-varying systems: low dimensional latent space tuning. Journal of Instrumentation 16(10), 10008 (2021) Montes de Oca Zapiain et al. [2021] Oca Zapiain, D., Stewart, J.A., Dingreville, R.: Accelerating phase-field-based microstructure evolution predictions via surrogate models trained by machine learning methods. npj Computational Materials 7(1), 3 (2021) Wiewel et al. [2019] Wiewel, S., Becher, M., Thuerey, N.: Latent space physics: Towards learning the temporal evolution of fluid flow. In: Computer Graphics Forum, vol. 38, pp. 71–82 (2019). Wiley Online Library Nakamura et al. 
[2021] Nakamura, T., Fukami, K., Hasegawa, K., Nabae, Y., Fukagata, K.: Convolutional neural network and long short-term memory based reduced order surrogate for minimal turbulent channel flow. Physics of Fluids 33(2) (2021) Maulik et al. [2021] Maulik, R., Lusch, B., Balaprakash, P.: Reduced-order modeling of advection-dominated systems with recurrent neural networks and convolutional autoencoders. Physics of Fluids 33(3) (2021) Vlachas et al. [2022] Vlachas, P.R., Arampatzis, G., Uhler, C., Koumoutsakos, P.: Multiscale simulations of complex systems by learning their effective dynamics. Nature Machine Intelligence 4(4), 359–366 (2022) Rautela et al. [2022] Rautela, M., Senthilnath, J., Monaco, E., Gopalakrishnan, S.: Delamination prediction in composite panels using unsupervised-feature learning methods with wavelet-enhanced guided wave representations. Composite Structures 291, 115579 (2022) Scheinker et al. [2023] Scheinker, A., Cropp, F., Filippetto, D.: Adaptive autoencoder latent space tuning for more robust machine learning beyond the training set for six-dimensional phase space diagnostics of a time-varying ultrafast electron-diffraction compact accelerator. Physical Review E 107(4), 045302 (2023) Kingma and Welling [2013] Kingma, D.P., Welling, M.: Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114 (2013) Hartmann et al. [2022] Hartmann, G., Goetzke, G., Düsterer, S., Feuer-Forson, P., Lever, F., Meier, D., Möller, F., Vera Ramirez, L., Guehr, M., Tiedtke, K., et al.: Unsupervised real-world knowledge extraction via disentangled variational autoencoders for photon diagnostics. Scientific Reports 12(1), 20783 (2022) Scheinker et al. [2023] Scheinker, A., Filippetto, D., Cropp, F.: 6d phase space diagnostics based on adaptively tuned physics-informed generative convolutional neural networks. In: Journal of Physics: Conference Series, vol. 2420, p. 012068 (2023). IOP Publishing Cathey et al. [2018] Cathey, B., Cousineau, S., Aleksandrov, A., Zhukov, A.: First six dimensional phase space measurement of an accelerator beam. Physical Review Letters 121(6), 064804 (2018) Tenenbaum [2005] Tenenbaum, P.: Lucretia: A matlab-based toolbox for the modelling and simulation of single-pass electron beam transport systems. In: Proceedings of the 2005 Particle Accelerator Conference, pp. 4197–4199 (2005). IEEE Young and Billen [2003] Young, L., Billen, J.: The particle tracking code parmela. In: Proceedings of the Particle Accelerator Conference, vol. 5, pp. 3521–3523 (2003) Pang and Rybarcyk [2014] Pang, X., Rybarcyk, L.: Gpu accelerated online multi-particle beam dynamics simulator for ion linear particle accelerators. Computer Physics Communications 185(3), 744–753 (2014) Adelmann [2019] Adelmann, A.: On nonintrusive uncertainty quantification and surrogate model construction in particle accelerator modeling. SIAM/ASA Journal on Uncertainty Quantification 7(2), 383–416 (2019) Newton [1970] Newton, R.: Inverse problems in physics. SIAM Review 12(3), 346–356 (1970) Wolski et al. [2022] Wolski, A., Johnson, M.A., King, M., Militsyn, B.L., Williams, P.H.: Transverse phase space tomography in an accelerator test facility using image compression and machine learning. Physical Review Accelerators and Beams 25(12), 122803 (2022) Mayet et al. 
[2022] Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022) Zhu et al. [2] Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. [2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. 
The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. 
[2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Rautela, M., Huber, A., Senthilnath, J., Gopalakrishnan, S.: Inverse characterization of composites using guided waves and convolutional neural networks with dual-branch feature fusion. Mechanics of Advanced Materials and Structures 29(27), 6595–6611 (2022) Huerta et al. [2021] Huerta, E., Khan, A., Huang, X., Tian, M., Levental, M., Chard, R., Wei, W., Heflin, M., Katz, D.S., Kindratenko, V., et al.: Accelerated, scalable and reproducible ai-driven gravitational wave detection. Nature Astronomy 5(10), 1062–1068 (2021) Boehnlein et al. [2022] Boehnlein, A., Diefenthaler, M., Sato, N., Schram, M., Ziegler, V., Fanelli, C., Hjorth-Jensen, M., Horn, T., Kuchera, M.P., Lee, D., et al.: Colloquium: Machine learning in nuclear physics. Reviews of Modern Physics 94(3), 031003 (2022) Gonoskov et al. [2019] Gonoskov, A., Wallin, E., Polovinkin, A., Meyerov, I.: Employing machine learning for theory validation and identification of experimental conditions in laser-plasma physics. Scientific reports 9(1), 7043 (2019) AlQuraishi and Sorger [2021] AlQuraishi, M., Sorger, P.K.: Differentiable biology: using deep learning for biophysics-based and data-driven modeling of molecular mechanisms. Nature methods 18(10), 1169–1180 (2021) Reichstein et al. [2019] Reichstein, M., Camps-Valls, G., Stevens, B., Jung, M., Denzler, J., Carvalhais, N., Prabhat, f.: Deep learning and process understanding for data-driven earth system science. Nature 566(7743), 195–204 (2019) Scheinker and Pokharel [2023] Scheinker, A., Pokharel, R.: Physics-constrained 3D convolutional neural networks for electrodynamics. APL Machine Learning 1(2) (2023) Wandel et al. [2021] Wandel, N., Weinmann, M., Klein, R.: Teaching the incompressible navier–stokes equations to fast neural surrogate models in three dimensions. Physics of Fluids 33(4) (2021) Shi et al. [2015] Shi, X., Chen, Z., Wang, H., Yeung, D.-Y., Wong, W.-K., Woo, W.-c.: Convolutional lstm network: A machine learning approach for precipitation nowcasting. Advances in neural information processing systems 28 (2015) Cheng et al. [2020] Cheng, M., Fang, F., Pain, C.C., Navon, I.: Data-driven modelling of nonlinear spatio-temporal fluid flows using a deep convolutional generative adversarial network. Computer Methods in Applied Mechanics and Engineering 365, 113000 (2020) Kipf and Welling [2016] Kipf, T.N., Welling, M.: Semi-supervised classification with graph convolutional networks. arXiv preprint arXiv:1609.02907 (2016) Chen et al. [2021] Chen, J., Hachem, E., Viquerat, J.: Graph neural networks for laminar flow prediction around random two-dimensional shapes. 
Physics of Fluids 33(12) (2021) Scheinker [2021] Scheinker, A.: Adaptive machine learning for time-varying systems: low dimensional latent space tuning. Journal of Instrumentation 16(10), 10008 (2021) Montes de Oca Zapiain et al. [2021] Oca Zapiain, D., Stewart, J.A., Dingreville, R.: Accelerating phase-field-based microstructure evolution predictions via surrogate models trained by machine learning methods. npj Computational Materials 7(1), 3 (2021) Wiewel et al. [2019] Wiewel, S., Becher, M., Thuerey, N.: Latent space physics: Towards learning the temporal evolution of fluid flow. In: Computer Graphics Forum, vol. 38, pp. 71–82 (2019). Wiley Online Library Nakamura et al. [2021] Nakamura, T., Fukami, K., Hasegawa, K., Nabae, Y., Fukagata, K.: Convolutional neural network and long short-term memory based reduced order surrogate for minimal turbulent channel flow. Physics of Fluids 33(2) (2021) Maulik et al. [2021] Maulik, R., Lusch, B., Balaprakash, P.: Reduced-order modeling of advection-dominated systems with recurrent neural networks and convolutional autoencoders. Physics of Fluids 33(3) (2021) Vlachas et al. [2022] Vlachas, P.R., Arampatzis, G., Uhler, C., Koumoutsakos, P.: Multiscale simulations of complex systems by learning their effective dynamics. Nature Machine Intelligence 4(4), 359–366 (2022) Rautela et al. [2022] Rautela, M., Senthilnath, J., Monaco, E., Gopalakrishnan, S.: Delamination prediction in composite panels using unsupervised-feature learning methods with wavelet-enhanced guided wave representations. Composite Structures 291, 115579 (2022) Scheinker et al. [2023] Scheinker, A., Cropp, F., Filippetto, D.: Adaptive autoencoder latent space tuning for more robust machine learning beyond the training set for six-dimensional phase space diagnostics of a time-varying ultrafast electron-diffraction compact accelerator. Physical Review E 107(4), 045302 (2023) Kingma and Welling [2013] Kingma, D.P., Welling, M.: Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114 (2013) Hartmann et al. [2022] Hartmann, G., Goetzke, G., Düsterer, S., Feuer-Forson, P., Lever, F., Meier, D., Möller, F., Vera Ramirez, L., Guehr, M., Tiedtke, K., et al.: Unsupervised real-world knowledge extraction via disentangled variational autoencoders for photon diagnostics. Scientific Reports 12(1), 20783 (2022) Scheinker et al. [2023] Scheinker, A., Filippetto, D., Cropp, F.: 6d phase space diagnostics based on adaptively tuned physics-informed generative convolutional neural networks. In: Journal of Physics: Conference Series, vol. 2420, p. 012068 (2023). IOP Publishing Cathey et al. [2018] Cathey, B., Cousineau, S., Aleksandrov, A., Zhukov, A.: First six dimensional phase space measurement of an accelerator beam. Physical Review Letters 121(6), 064804 (2018) Tenenbaum [2005] Tenenbaum, P.: Lucretia: A matlab-based toolbox for the modelling and simulation of single-pass electron beam transport systems. In: Proceedings of the 2005 Particle Accelerator Conference, pp. 4197–4199 (2005). IEEE Young and Billen [2003] Young, L., Billen, J.: The particle tracking code parmela. In: Proceedings of the Particle Accelerator Conference, vol. 5, pp. 3521–3523 (2003) Pang and Rybarcyk [2014] Pang, X., Rybarcyk, L.: Gpu accelerated online multi-particle beam dynamics simulator for ion linear particle accelerators. Computer Physics Communications 185(3), 744–753 (2014) Adelmann [2019] Adelmann, A.: On nonintrusive uncertainty quantification and surrogate model construction in particle accelerator modeling. 
[2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. 
[2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Scheinker, A., Pokharel, R.: Physics-constrained 3D convolutional neural networks for electrodynamics. APL Machine Learning 1(2) (2023) Wandel et al. [2021] Wandel, N., Weinmann, M., Klein, R.: Teaching the incompressible navier–stokes equations to fast neural surrogate models in three dimensions. Physics of Fluids 33(4) (2021) Shi et al. [2015] Shi, X., Chen, Z., Wang, H., Yeung, D.-Y., Wong, W.-K., Woo, W.-c.: Convolutional lstm network: A machine learning approach for precipitation nowcasting. Advances in neural information processing systems 28 (2015) Cheng et al. [2020] Cheng, M., Fang, F., Pain, C.C., Navon, I.: Data-driven modelling of nonlinear spatio-temporal fluid flows using a deep convolutional generative adversarial network. Computer Methods in Applied Mechanics and Engineering 365, 113000 (2020) Kipf and Welling [2016] Kipf, T.N., Welling, M.: Semi-supervised classification with graph convolutional networks. arXiv preprint arXiv:1609.02907 (2016) Chen et al. [2021] Chen, J., Hachem, E., Viquerat, J.: Graph neural networks for laminar flow prediction around random two-dimensional shapes. Physics of Fluids 33(12) (2021) Scheinker [2021] Scheinker, A.: Adaptive machine learning for time-varying systems: low dimensional latent space tuning. Journal of Instrumentation 16(10), 10008 (2021) Montes de Oca Zapiain et al. [2021] Oca Zapiain, D., Stewart, J.A., Dingreville, R.: Accelerating phase-field-based microstructure evolution predictions via surrogate models trained by machine learning methods. npj Computational Materials 7(1), 3 (2021) Wiewel et al. [2019] Wiewel, S., Becher, M., Thuerey, N.: Latent space physics: Towards learning the temporal evolution of fluid flow. In: Computer Graphics Forum, vol. 38, pp. 71–82 (2019). Wiley Online Library Nakamura et al. [2021] Nakamura, T., Fukami, K., Hasegawa, K., Nabae, Y., Fukagata, K.: Convolutional neural network and long short-term memory based reduced order surrogate for minimal turbulent channel flow. Physics of Fluids 33(2) (2021) Maulik et al. [2021] Maulik, R., Lusch, B., Balaprakash, P.: Reduced-order modeling of advection-dominated systems with recurrent neural networks and convolutional autoencoders. Physics of Fluids 33(3) (2021) Vlachas et al. [2022] Vlachas, P.R., Arampatzis, G., Uhler, C., Koumoutsakos, P.: Multiscale simulations of complex systems by learning their effective dynamics. Nature Machine Intelligence 4(4), 359–366 (2022) Rautela et al. [2022] Rautela, M., Senthilnath, J., Monaco, E., Gopalakrishnan, S.: Delamination prediction in composite panels using unsupervised-feature learning methods with wavelet-enhanced guided wave representations. Composite Structures 291, 115579 (2022) Scheinker et al. [2023] Scheinker, A., Cropp, F., Filippetto, D.: Adaptive autoencoder latent space tuning for more robust machine learning beyond the training set for six-dimensional phase space diagnostics of a time-varying ultrafast electron-diffraction compact accelerator. Physical Review E 107(4), 045302 (2023) Kingma and Welling [2013] Kingma, D.P., Welling, M.: Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114 (2013) Hartmann et al. 
[2022] Hartmann, G., Goetzke, G., Düsterer, S., Feuer-Forson, P., Lever, F., Meier, D., Möller, F., Vera Ramirez, L., Guehr, M., Tiedtke, K., et al.: Unsupervised real-world knowledge extraction via disentangled variational autoencoders for photon diagnostics. Scientific Reports 12(1), 20783 (2022) Scheinker et al. [2023] Scheinker, A., Filippetto, D., Cropp, F.: 6d phase space diagnostics based on adaptively tuned physics-informed generative convolutional neural networks. In: Journal of Physics: Conference Series, vol. 2420, p. 012068 (2023). IOP Publishing Cathey et al. [2018] Cathey, B., Cousineau, S., Aleksandrov, A., Zhukov, A.: First six dimensional phase space measurement of an accelerator beam. Physical Review Letters 121(6), 064804 (2018) Tenenbaum [2005] Tenenbaum, P.: Lucretia: A matlab-based toolbox for the modelling and simulation of single-pass electron beam transport systems. In: Proceedings of the 2005 Particle Accelerator Conference, pp. 4197–4199 (2005). IEEE Young and Billen [2003] Young, L., Billen, J.: The particle tracking code parmela. In: Proceedings of the Particle Accelerator Conference, vol. 5, pp. 3521–3523 (2003) Pang and Rybarcyk [2014] Pang, X., Rybarcyk, L.: Gpu accelerated online multi-particle beam dynamics simulator for ion linear particle accelerators. Computer Physics Communications 185(3), 744–753 (2014) Adelmann [2019] Adelmann, A.: On nonintrusive uncertainty quantification and surrogate model construction in particle accelerator modeling. SIAM/ASA Journal on Uncertainty Quantification 7(2), 383–416 (2019) Newton [1970] Newton, R.: Inverse problems in physics. SIAM Review 12(3), 346–356 (1970) Wolski et al. [2022] Wolski, A., Johnson, M.A., King, M., Militsyn, B.L., Williams, P.H.: Transverse phase space tomography in an accelerator test facility using image compression and machine learning. Physical Review Accelerators and Beams 25(12), 122803 (2022) Mayet et al. [2022] Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022) Zhu et al. [2] Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. [2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. 
[2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. 
[2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Wandel, N., Weinmann, M., Klein, R.: Teaching the incompressible navier–stokes equations to fast neural surrogate models in three dimensions. Physics of Fluids 33(4) (2021) Shi et al. [2015] Shi, X., Chen, Z., Wang, H., Yeung, D.-Y., Wong, W.-K., Woo, W.-c.: Convolutional lstm network: A machine learning approach for precipitation nowcasting. Advances in neural information processing systems 28 (2015) Cheng et al. [2020] Cheng, M., Fang, F., Pain, C.C., Navon, I.: Data-driven modelling of nonlinear spatio-temporal fluid flows using a deep convolutional generative adversarial network. 
Computer Methods in Applied Mechanics and Engineering 365, 113000 (2020) Kipf and Welling [2016] Kipf, T.N., Welling, M.: Semi-supervised classification with graph convolutional networks. arXiv preprint arXiv:1609.02907 (2016) Chen et al. [2021] Chen, J., Hachem, E., Viquerat, J.: Graph neural networks for laminar flow prediction around random two-dimensional shapes. Physics of Fluids 33(12) (2021) Scheinker [2021] Scheinker, A.: Adaptive machine learning for time-varying systems: low dimensional latent space tuning. Journal of Instrumentation 16(10), 10008 (2021) Montes de Oca Zapiain et al. [2021] Oca Zapiain, D., Stewart, J.A., Dingreville, R.: Accelerating phase-field-based microstructure evolution predictions via surrogate models trained by machine learning methods. npj Computational Materials 7(1), 3 (2021) Wiewel et al. [2019] Wiewel, S., Becher, M., Thuerey, N.: Latent space physics: Towards learning the temporal evolution of fluid flow. In: Computer Graphics Forum, vol. 38, pp. 71–82 (2019). Wiley Online Library Nakamura et al. [2021] Nakamura, T., Fukami, K., Hasegawa, K., Nabae, Y., Fukagata, K.: Convolutional neural network and long short-term memory based reduced order surrogate for minimal turbulent channel flow. Physics of Fluids 33(2) (2021) Maulik et al. [2021] Maulik, R., Lusch, B., Balaprakash, P.: Reduced-order modeling of advection-dominated systems with recurrent neural networks and convolutional autoencoders. Physics of Fluids 33(3) (2021) Vlachas et al. [2022] Vlachas, P.R., Arampatzis, G., Uhler, C., Koumoutsakos, P.: Multiscale simulations of complex systems by learning their effective dynamics. Nature Machine Intelligence 4(4), 359–366 (2022) Rautela et al. [2022] Rautela, M., Senthilnath, J., Monaco, E., Gopalakrishnan, S.: Delamination prediction in composite panels using unsupervised-feature learning methods with wavelet-enhanced guided wave representations. Composite Structures 291, 115579 (2022) Scheinker et al. [2023] Scheinker, A., Cropp, F., Filippetto, D.: Adaptive autoencoder latent space tuning for more robust machine learning beyond the training set for six-dimensional phase space diagnostics of a time-varying ultrafast electron-diffraction compact accelerator. Physical Review E 107(4), 045302 (2023) Kingma and Welling [2013] Kingma, D.P., Welling, M.: Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114 (2013) Hartmann et al. [2022] Hartmann, G., Goetzke, G., Düsterer, S., Feuer-Forson, P., Lever, F., Meier, D., Möller, F., Vera Ramirez, L., Guehr, M., Tiedtke, K., et al.: Unsupervised real-world knowledge extraction via disentangled variational autoencoders for photon diagnostics. Scientific Reports 12(1), 20783 (2022) Scheinker et al. [2023] Scheinker, A., Filippetto, D., Cropp, F.: 6d phase space diagnostics based on adaptively tuned physics-informed generative convolutional neural networks. In: Journal of Physics: Conference Series, vol. 2420, p. 012068 (2023). IOP Publishing Cathey et al. [2018] Cathey, B., Cousineau, S., Aleksandrov, A., Zhukov, A.: First six dimensional phase space measurement of an accelerator beam. Physical Review Letters 121(6), 064804 (2018) Tenenbaum [2005] Tenenbaum, P.: Lucretia: A matlab-based toolbox for the modelling and simulation of single-pass electron beam transport systems. In: Proceedings of the 2005 Particle Accelerator Conference, pp. 4197–4199 (2005). IEEE Young and Billen [2003] Young, L., Billen, J.: The particle tracking code parmela. In: Proceedings of the Particle Accelerator Conference, vol. 
5, pp. 3521–3523 (2003) Pang and Rybarcyk [2014] Pang, X., Rybarcyk, L.: Gpu accelerated online multi-particle beam dynamics simulator for ion linear particle accelerators. Computer Physics Communications 185(3), 744–753 (2014) Adelmann [2019] Adelmann, A.: On nonintrusive uncertainty quantification and surrogate model construction in particle accelerator modeling. SIAM/ASA Journal on Uncertainty Quantification 7(2), 383–416 (2019) Newton [1970] Newton, R.: Inverse problems in physics. SIAM Review 12(3), 346–356 (1970) Wolski et al. [2022] Wolski, A., Johnson, M.A., King, M., Militsyn, B.L., Williams, P.H.: Transverse phase space tomography in an accelerator test facility using image compression and machine learning. Physical Review Accelerators and Beams 25(12), 122803 (2022) Mayet et al. [2022] Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022) Zhu et al. [2] Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. [2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. 
[2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. 
arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Shi, X., Chen, Z., Wang, H., Yeung, D.-Y., Wong, W.-K., Woo, W.-c.: Convolutional lstm network: A machine learning approach for precipitation nowcasting. Advances in neural information processing systems 28 (2015) Cheng et al. [2020] Cheng, M., Fang, F., Pain, C.C., Navon, I.: Data-driven modelling of nonlinear spatio-temporal fluid flows using a deep convolutional generative adversarial network. Computer Methods in Applied Mechanics and Engineering 365, 113000 (2020) Kipf and Welling [2016] Kipf, T.N., Welling, M.: Semi-supervised classification with graph convolutional networks. arXiv preprint arXiv:1609.02907 (2016) Chen et al. [2021] Chen, J., Hachem, E., Viquerat, J.: Graph neural networks for laminar flow prediction around random two-dimensional shapes. Physics of Fluids 33(12) (2021) Scheinker [2021] Scheinker, A.: Adaptive machine learning for time-varying systems: low dimensional latent space tuning. Journal of Instrumentation 16(10), 10008 (2021) Montes de Oca Zapiain et al. [2021] Oca Zapiain, D., Stewart, J.A., Dingreville, R.: Accelerating phase-field-based microstructure evolution predictions via surrogate models trained by machine learning methods. npj Computational Materials 7(1), 3 (2021) Wiewel et al. [2019] Wiewel, S., Becher, M., Thuerey, N.: Latent space physics: Towards learning the temporal evolution of fluid flow. In: Computer Graphics Forum, vol. 38, pp. 71–82 (2019). Wiley Online Library Nakamura et al. [2021] Nakamura, T., Fukami, K., Hasegawa, K., Nabae, Y., Fukagata, K.: Convolutional neural network and long short-term memory based reduced order surrogate for minimal turbulent channel flow. Physics of Fluids 33(2) (2021) Maulik et al. 
[2021] Maulik, R., Lusch, B., Balaprakash, P.: Reduced-order modeling of advection-dominated systems with recurrent neural networks and convolutional autoencoders. Physics of Fluids 33(3) (2021) Vlachas et al. [2022] Vlachas, P.R., Arampatzis, G., Uhler, C., Koumoutsakos, P.: Multiscale simulations of complex systems by learning their effective dynamics. Nature Machine Intelligence 4(4), 359–366 (2022) Rautela et al. [2022] Rautela, M., Senthilnath, J., Monaco, E., Gopalakrishnan, S.: Delamination prediction in composite panels using unsupervised-feature learning methods with wavelet-enhanced guided wave representations. Composite Structures 291, 115579 (2022) Scheinker et al. [2023] Scheinker, A., Cropp, F., Filippetto, D.: Adaptive autoencoder latent space tuning for more robust machine learning beyond the training set for six-dimensional phase space diagnostics of a time-varying ultrafast electron-diffraction compact accelerator. Physical Review E 107(4), 045302 (2023) Kingma and Welling [2013] Kingma, D.P., Welling, M.: Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114 (2013) Hartmann et al. [2022] Hartmann, G., Goetzke, G., Düsterer, S., Feuer-Forson, P., Lever, F., Meier, D., Möller, F., Vera Ramirez, L., Guehr, M., Tiedtke, K., et al.: Unsupervised real-world knowledge extraction via disentangled variational autoencoders for photon diagnostics. Scientific Reports 12(1), 20783 (2022) Scheinker et al. [2023] Scheinker, A., Filippetto, D., Cropp, F.: 6d phase space diagnostics based on adaptively tuned physics-informed generative convolutional neural networks. In: Journal of Physics: Conference Series, vol. 2420, p. 012068 (2023). IOP Publishing Cathey et al. [2018] Cathey, B., Cousineau, S., Aleksandrov, A., Zhukov, A.: First six dimensional phase space measurement of an accelerator beam. Physical Review Letters 121(6), 064804 (2018) Tenenbaum [2005] Tenenbaum, P.: Lucretia: A matlab-based toolbox for the modelling and simulation of single-pass electron beam transport systems. In: Proceedings of the 2005 Particle Accelerator Conference, pp. 4197–4199 (2005). IEEE Young and Billen [2003] Young, L., Billen, J.: The particle tracking code parmela. In: Proceedings of the Particle Accelerator Conference, vol. 5, pp. 3521–3523 (2003) Pang and Rybarcyk [2014] Pang, X., Rybarcyk, L.: Gpu accelerated online multi-particle beam dynamics simulator for ion linear particle accelerators. Computer Physics Communications 185(3), 744–753 (2014) Adelmann [2019] Adelmann, A.: On nonintrusive uncertainty quantification and surrogate model construction in particle accelerator modeling. SIAM/ASA Journal on Uncertainty Quantification 7(2), 383–416 (2019) Newton [1970] Newton, R.: Inverse problems in physics. SIAM Review 12(3), 346–356 (1970) Wolski et al. [2022] Wolski, A., Johnson, M.A., King, M., Militsyn, B.L., Williams, P.H.: Transverse phase space tomography in an accelerator test facility using image compression and machine learning. Physical Review Accelerators and Beams 25(12), 122803 (2022) Mayet et al. [2022] Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022) Zhu et al. 
[2] Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. [2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. 
[2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. 
[2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Cheng, M., Fang, F., Pain, C.C., Navon, I.: Data-driven modelling of nonlinear spatio-temporal fluid flows using a deep convolutional generative adversarial network. Computer Methods in Applied Mechanics and Engineering 365, 113000 (2020) Kipf and Welling [2016] Kipf, T.N., Welling, M.: Semi-supervised classification with graph convolutional networks. arXiv preprint arXiv:1609.02907 (2016) Chen et al. [2021] Chen, J., Hachem, E., Viquerat, J.: Graph neural networks for laminar flow prediction around random two-dimensional shapes. Physics of Fluids 33(12) (2021) Scheinker [2021] Scheinker, A.: Adaptive machine learning for time-varying systems: low dimensional latent space tuning. Journal of Instrumentation 16(10), 10008 (2021) Montes de Oca Zapiain et al. [2021] Oca Zapiain, D., Stewart, J.A., Dingreville, R.: Accelerating phase-field-based microstructure evolution predictions via surrogate models trained by machine learning methods. npj Computational Materials 7(1), 3 (2021) Wiewel et al. [2019] Wiewel, S., Becher, M., Thuerey, N.: Latent space physics: Towards learning the temporal evolution of fluid flow. In: Computer Graphics Forum, vol. 38, pp. 71–82 (2019). Wiley Online Library Nakamura et al. [2021] Nakamura, T., Fukami, K., Hasegawa, K., Nabae, Y., Fukagata, K.: Convolutional neural network and long short-term memory based reduced order surrogate for minimal turbulent channel flow. Physics of Fluids 33(2) (2021) Maulik et al. [2021] Maulik, R., Lusch, B., Balaprakash, P.: Reduced-order modeling of advection-dominated systems with recurrent neural networks and convolutional autoencoders. Physics of Fluids 33(3) (2021) Vlachas et al. [2022] Vlachas, P.R., Arampatzis, G., Uhler, C., Koumoutsakos, P.: Multiscale simulations of complex systems by learning their effective dynamics. Nature Machine Intelligence 4(4), 359–366 (2022) Rautela et al. [2022] Rautela, M., Senthilnath, J., Monaco, E., Gopalakrishnan, S.: Delamination prediction in composite panels using unsupervised-feature learning methods with wavelet-enhanced guided wave representations. Composite Structures 291, 115579 (2022) Scheinker et al. [2023] Scheinker, A., Cropp, F., Filippetto, D.: Adaptive autoencoder latent space tuning for more robust machine learning beyond the training set for six-dimensional phase space diagnostics of a time-varying ultrafast electron-diffraction compact accelerator. Physical Review E 107(4), 045302 (2023) Kingma and Welling [2013] Kingma, D.P., Welling, M.: Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114 (2013) Hartmann et al. [2022] Hartmann, G., Goetzke, G., Düsterer, S., Feuer-Forson, P., Lever, F., Meier, D., Möller, F., Vera Ramirez, L., Guehr, M., Tiedtke, K., et al.: Unsupervised real-world knowledge extraction via disentangled variational autoencoders for photon diagnostics. 
[2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Hartmann, G., Goetzke, G., Düsterer, S., Feuer-Forson, P., Lever, F., Meier, D., Möller, F., Vera Ramirez, L., Guehr, M., Tiedtke, K., et al.: Unsupervised real-world knowledge extraction via disentangled variational autoencoders for photon diagnostics. Scientific Reports 12(1), 20783 (2022) Scheinker et al. [2023] Scheinker, A., Filippetto, D., Cropp, F.: 6d phase space diagnostics based on adaptively tuned physics-informed generative convolutional neural networks. In: Journal of Physics: Conference Series, vol. 2420, p. 012068 (2023). IOP Publishing Cathey et al. [2018] Cathey, B., Cousineau, S., Aleksandrov, A., Zhukov, A.: First six dimensional phase space measurement of an accelerator beam. Physical Review Letters 121(6), 064804 (2018) Tenenbaum [2005] Tenenbaum, P.: Lucretia: A matlab-based toolbox for the modelling and simulation of single-pass electron beam transport systems. In: Proceedings of the 2005 Particle Accelerator Conference, pp. 4197–4199 (2005). IEEE Young and Billen [2003] Young, L., Billen, J.: The particle tracking code parmela. In: Proceedings of the Particle Accelerator Conference, vol. 5, pp. 3521–3523 (2003) Pang and Rybarcyk [2014] Pang, X., Rybarcyk, L.: Gpu accelerated online multi-particle beam dynamics simulator for ion linear particle accelerators. Computer Physics Communications 185(3), 744–753 (2014) Adelmann [2019] Adelmann, A.: On nonintrusive uncertainty quantification and surrogate model construction in particle accelerator modeling. SIAM/ASA Journal on Uncertainty Quantification 7(2), 383–416 (2019) Newton [1970] Newton, R.: Inverse problems in physics. SIAM Review 12(3), 346–356 (1970) Wolski et al. [2022] Wolski, A., Johnson, M.A., King, M., Militsyn, B.L., Williams, P.H.: Transverse phase space tomography in an accelerator test facility using image compression and machine learning. Physical Review Accelerators and Beams 25(12), 122803 (2022) Mayet et al. [2022] Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022) Zhu et al. [2] Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. [2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. 
Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. 
Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Scheinker, A., Filippetto, D., Cropp, F.: 6d phase space diagnostics based on adaptively tuned physics-informed generative convolutional neural networks. In: Journal of Physics: Conference Series, vol. 2420, p. 012068 (2023). IOP Publishing Cathey et al. [2018] Cathey, B., Cousineau, S., Aleksandrov, A., Zhukov, A.: First six dimensional phase space measurement of an accelerator beam. 
Physical Review Letters 121(6), 064804 (2018) Tenenbaum [2005] Tenenbaum, P.: Lucretia: A matlab-based toolbox for the modelling and simulation of single-pass electron beam transport systems. In: Proceedings of the 2005 Particle Accelerator Conference, pp. 4197–4199 (2005). IEEE Young and Billen [2003] Young, L., Billen, J.: The particle tracking code parmela. In: Proceedings of the Particle Accelerator Conference, vol. 5, pp. 3521–3523 (2003) Pang and Rybarcyk [2014] Pang, X., Rybarcyk, L.: Gpu accelerated online multi-particle beam dynamics simulator for ion linear particle accelerators. Computer Physics Communications 185(3), 744–753 (2014) Adelmann [2019] Adelmann, A.: On nonintrusive uncertainty quantification and surrogate model construction in particle accelerator modeling. SIAM/ASA Journal on Uncertainty Quantification 7(2), 383–416 (2019) Newton [1970] Newton, R.: Inverse problems in physics. SIAM Review 12(3), 346–356 (1970) Wolski et al. [2022] Wolski, A., Johnson, M.A., King, M., Militsyn, B.L., Williams, P.H.: Transverse phase space tomography in an accelerator test facility using image compression and machine learning. Physical Review Accelerators and Beams 25(12), 122803 (2022) Mayet et al. [2022] Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022) Zhu et al. [2] Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. [2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. 
Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. 
IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Cathey, B., Cousineau, S., Aleksandrov, A., Zhukov, A.: First six dimensional phase space measurement of an accelerator beam. Physical Review Letters 121(6), 064804 (2018) Tenenbaum [2005] Tenenbaum, P.: Lucretia: A matlab-based toolbox for the modelling and simulation of single-pass electron beam transport systems. In: Proceedings of the 2005 Particle Accelerator Conference, pp. 4197–4199 (2005). IEEE Young and Billen [2003] Young, L., Billen, J.: The particle tracking code parmela. In: Proceedings of the Particle Accelerator Conference, vol. 5, pp. 3521–3523 (2003) Pang and Rybarcyk [2014] Pang, X., Rybarcyk, L.: Gpu accelerated online multi-particle beam dynamics simulator for ion linear particle accelerators. Computer Physics Communications 185(3), 744–753 (2014) Adelmann [2019] Adelmann, A.: On nonintrusive uncertainty quantification and surrogate model construction in particle accelerator modeling. SIAM/ASA Journal on Uncertainty Quantification 7(2), 383–416 (2019) Newton [1970] Newton, R.: Inverse problems in physics. SIAM Review 12(3), 346–356 (1970) Wolski et al. [2022] Wolski, A., Johnson, M.A., King, M., Militsyn, B.L., Williams, P.H.: Transverse phase space tomography in an accelerator test facility using image compression and machine learning. Physical Review Accelerators and Beams 25(12), 122803 (2022) Mayet et al. 
[2022] Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022) Zhu et al. [2] Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. [2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. 
The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. 
[2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Tenenbaum, P.: Lucretia: A matlab-based toolbox for the modelling and simulation of single-pass electron beam transport systems. In: Proceedings of the 2005 Particle Accelerator Conference, pp. 4197–4199 (2005). IEEE Young and Billen [2003] Young, L., Billen, J.: The particle tracking code parmela. In: Proceedings of the Particle Accelerator Conference, vol. 5, pp. 3521–3523 (2003) Pang and Rybarcyk [2014] Pang, X., Rybarcyk, L.: Gpu accelerated online multi-particle beam dynamics simulator for ion linear particle accelerators. Computer Physics Communications 185(3), 744–753 (2014) Adelmann [2019] Adelmann, A.: On nonintrusive uncertainty quantification and surrogate model construction in particle accelerator modeling. SIAM/ASA Journal on Uncertainty Quantification 7(2), 383–416 (2019) Newton [1970] Newton, R.: Inverse problems in physics. SIAM Review 12(3), 346–356 (1970) Wolski et al. [2022] Wolski, A., Johnson, M.A., King, M., Militsyn, B.L., Williams, P.H.: Transverse phase space tomography in an accelerator test facility using image compression and machine learning. Physical Review Accelerators and Beams 25(12), 122803 (2022) Mayet et al. [2022] Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022) Zhu et al. [2] Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. [2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. 
Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. 
Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Young, L., Billen, J.: The particle tracking code parmela. In: Proceedings of the Particle Accelerator Conference, vol. 5, pp. 3521–3523 (2003) Pang and Rybarcyk [2014] Pang, X., Rybarcyk, L.: Gpu accelerated online multi-particle beam dynamics simulator for ion linear particle accelerators. Computer Physics Communications 185(3), 744–753 (2014) Adelmann [2019] Adelmann, A.: On nonintrusive uncertainty quantification and surrogate model construction in particle accelerator modeling. SIAM/ASA Journal on Uncertainty Quantification 7(2), 383–416 (2019) Newton [1970] Newton, R.: Inverse problems in physics. 
SIAM Review 12(3), 346–356 (1970) Wolski et al. [2022] Wolski, A., Johnson, M.A., King, M., Militsyn, B.L., Williams, P.H.: Transverse phase space tomography in an accelerator test facility using image compression and machine learning. Physical Review Accelerators and Beams 25(12), 122803 (2022) Mayet et al. [2022] Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022) Zhu et al. [2] Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. [2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. 
[2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. 
In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. 
[2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. 
[2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. 
[2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. 
[2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. 
[2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. 
arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. 
IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. 
In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. 
IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. 
[2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. 
[2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. 
IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. 
[2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. 
IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. 
In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. 
[2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. 
[2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. 
IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. 
arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. 
[2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. 
[2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. 
[2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. 
arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. 
[2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. 
[2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. 
[2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. 
[2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. 
arXiv preprint arXiv:1807.07543 (2018) Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). 
IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018)
67. Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023)
68. Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of ResNet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE
69. He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016)
70. Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018)
IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Kipf, T.N., Welling, M.: Semi-supervised classification with graph convolutional networks. arXiv preprint arXiv:1609.02907 (2016) Chen et al. [2021] Chen, J., Hachem, E., Viquerat, J.: Graph neural networks for laminar flow prediction around random two-dimensional shapes. Physics of Fluids 33(12) (2021) Scheinker [2021] Scheinker, A.: Adaptive machine learning for time-varying systems: low dimensional latent space tuning. Journal of Instrumentation 16(10), 10008 (2021) Montes de Oca Zapiain et al. [2021] Oca Zapiain, D., Stewart, J.A., Dingreville, R.: Accelerating phase-field-based microstructure evolution predictions via surrogate models trained by machine learning methods. npj Computational Materials 7(1), 3 (2021) Wiewel et al. [2019] Wiewel, S., Becher, M., Thuerey, N.: Latent space physics: Towards learning the temporal evolution of fluid flow. In: Computer Graphics Forum, vol. 38, pp. 71–82 (2019). Wiley Online Library Nakamura et al. [2021] Nakamura, T., Fukami, K., Hasegawa, K., Nabae, Y., Fukagata, K.: Convolutional neural network and long short-term memory based reduced order surrogate for minimal turbulent channel flow. Physics of Fluids 33(2) (2021) Maulik et al. [2021] Maulik, R., Lusch, B., Balaprakash, P.: Reduced-order modeling of advection-dominated systems with recurrent neural networks and convolutional autoencoders. Physics of Fluids 33(3) (2021) Vlachas et al. [2022] Vlachas, P.R., Arampatzis, G., Uhler, C., Koumoutsakos, P.: Multiscale simulations of complex systems by learning their effective dynamics. Nature Machine Intelligence 4(4), 359–366 (2022) Rautela et al. [2022] Rautela, M., Senthilnath, J., Monaco, E., Gopalakrishnan, S.: Delamination prediction in composite panels using unsupervised-feature learning methods with wavelet-enhanced guided wave representations. Composite Structures 291, 115579 (2022) Scheinker et al. 
[2023] Scheinker, A., Cropp, F., Filippetto, D.: Adaptive autoencoder latent space tuning for more robust machine learning beyond the training set for six-dimensional phase space diagnostics of a time-varying ultrafast electron-diffraction compact accelerator. Physical Review E 107(4), 045302 (2023) Kingma and Welling [2013] Kingma, D.P., Welling, M.: Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114 (2013) Hartmann et al. [2022] Hartmann, G., Goetzke, G., Düsterer, S., Feuer-Forson, P., Lever, F., Meier, D., Möller, F., Vera Ramirez, L., Guehr, M., Tiedtke, K., et al.: Unsupervised real-world knowledge extraction via disentangled variational autoencoders for photon diagnostics. Scientific Reports 12(1), 20783 (2022) Scheinker et al. [2023] Scheinker, A., Filippetto, D., Cropp, F.: 6d phase space diagnostics based on adaptively tuned physics-informed generative convolutional neural networks. In: Journal of Physics: Conference Series, vol. 2420, p. 012068 (2023). IOP Publishing Cathey et al. [2018] Cathey, B., Cousineau, S., Aleksandrov, A., Zhukov, A.: First six dimensional phase space measurement of an accelerator beam. Physical Review Letters 121(6), 064804 (2018) Tenenbaum [2005] Tenenbaum, P.: Lucretia: A matlab-based toolbox for the modelling and simulation of single-pass electron beam transport systems. In: Proceedings of the 2005 Particle Accelerator Conference, pp. 4197–4199 (2005). IEEE Young and Billen [2003] Young, L., Billen, J.: The particle tracking code parmela. In: Proceedings of the Particle Accelerator Conference, vol. 5, pp. 3521–3523 (2003) Pang and Rybarcyk [2014] Pang, X., Rybarcyk, L.: Gpu accelerated online multi-particle beam dynamics simulator for ion linear particle accelerators. Computer Physics Communications 185(3), 744–753 (2014) Adelmann [2019] Adelmann, A.: On nonintrusive uncertainty quantification and surrogate model construction in particle accelerator modeling. SIAM/ASA Journal on Uncertainty Quantification 7(2), 383–416 (2019) Newton [1970] Newton, R.: Inverse problems in physics. SIAM Review 12(3), 346–356 (1970) Wolski et al. [2022] Wolski, A., Johnson, M.A., King, M., Militsyn, B.L., Williams, P.H.: Transverse phase space tomography in an accelerator test facility using image compression and machine learning. Physical Review Accelerators and Beams 25(12), 122803 (2022) Mayet et al. [2022] Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022) Zhu et al. [2] Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. [2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. 
[2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. 
Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Chen, J., Hachem, E., Viquerat, J.: Graph neural networks for laminar flow prediction around random two-dimensional shapes. 
Physics of Fluids 33(12) (2021) Scheinker [2021] Scheinker, A.: Adaptive machine learning for time-varying systems: low dimensional latent space tuning. Journal of Instrumentation 16(10), 10008 (2021) Montes de Oca Zapiain et al. [2021] Oca Zapiain, D., Stewart, J.A., Dingreville, R.: Accelerating phase-field-based microstructure evolution predictions via surrogate models trained by machine learning methods. npj Computational Materials 7(1), 3 (2021) Wiewel et al. [2019] Wiewel, S., Becher, M., Thuerey, N.: Latent space physics: Towards learning the temporal evolution of fluid flow. In: Computer Graphics Forum, vol. 38, pp. 71–82 (2019). Wiley Online Library Nakamura et al. [2021] Nakamura, T., Fukami, K., Hasegawa, K., Nabae, Y., Fukagata, K.: Convolutional neural network and long short-term memory based reduced order surrogate for minimal turbulent channel flow. Physics of Fluids 33(2) (2021) Maulik et al. [2021] Maulik, R., Lusch, B., Balaprakash, P.: Reduced-order modeling of advection-dominated systems with recurrent neural networks and convolutional autoencoders. Physics of Fluids 33(3) (2021) Vlachas et al. [2022] Vlachas, P.R., Arampatzis, G., Uhler, C., Koumoutsakos, P.: Multiscale simulations of complex systems by learning their effective dynamics. Nature Machine Intelligence 4(4), 359–366 (2022) Rautela et al. [2022] Rautela, M., Senthilnath, J., Monaco, E., Gopalakrishnan, S.: Delamination prediction in composite panels using unsupervised-feature learning methods with wavelet-enhanced guided wave representations. Composite Structures 291, 115579 (2022) Scheinker et al. [2023] Scheinker, A., Cropp, F., Filippetto, D.: Adaptive autoencoder latent space tuning for more robust machine learning beyond the training set for six-dimensional phase space diagnostics of a time-varying ultrafast electron-diffraction compact accelerator. Physical Review E 107(4), 045302 (2023) Kingma and Welling [2013] Kingma, D.P., Welling, M.: Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114 (2013) Hartmann et al. [2022] Hartmann, G., Goetzke, G., Düsterer, S., Feuer-Forson, P., Lever, F., Meier, D., Möller, F., Vera Ramirez, L., Guehr, M., Tiedtke, K., et al.: Unsupervised real-world knowledge extraction via disentangled variational autoencoders for photon diagnostics. Scientific Reports 12(1), 20783 (2022) Scheinker et al. [2023] Scheinker, A., Filippetto, D., Cropp, F.: 6d phase space diagnostics based on adaptively tuned physics-informed generative convolutional neural networks. In: Journal of Physics: Conference Series, vol. 2420, p. 012068 (2023). IOP Publishing Cathey et al. [2018] Cathey, B., Cousineau, S., Aleksandrov, A., Zhukov, A.: First six dimensional phase space measurement of an accelerator beam. Physical Review Letters 121(6), 064804 (2018) Tenenbaum [2005] Tenenbaum, P.: Lucretia: A matlab-based toolbox for the modelling and simulation of single-pass electron beam transport systems. In: Proceedings of the 2005 Particle Accelerator Conference, pp. 4197–4199 (2005). IEEE Young and Billen [2003] Young, L., Billen, J.: The particle tracking code parmela. In: Proceedings of the Particle Accelerator Conference, vol. 5, pp. 3521–3523 (2003) Pang and Rybarcyk [2014] Pang, X., Rybarcyk, L.: Gpu accelerated online multi-particle beam dynamics simulator for ion linear particle accelerators. Computer Physics Communications 185(3), 744–753 (2014) Adelmann [2019] Adelmann, A.: On nonintrusive uncertainty quantification and surrogate model construction in particle accelerator modeling. 
SIAM/ASA Journal on Uncertainty Quantification 7(2), 383–416 (2019) Newton [1970] Newton, R.: Inverse problems in physics. SIAM Review 12(3), 346–356 (1970) Wolski et al. [2022] Wolski, A., Johnson, M.A., King, M., Militsyn, B.L., Williams, P.H.: Transverse phase space tomography in an accelerator test facility using image compression and machine learning. Physical Review Accelerators and Beams 25(12), 122803 (2022) Mayet et al. [2022] Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022) Zhu et al. [2] Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. [2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. 
Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. 
[2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Scheinker, A.: Adaptive machine learning for time-varying systems: low dimensional latent space tuning. Journal of Instrumentation 16(10), 10008 (2021) Montes de Oca Zapiain et al. [2021] Oca Zapiain, D., Stewart, J.A., Dingreville, R.: Accelerating phase-field-based microstructure evolution predictions via surrogate models trained by machine learning methods. npj Computational Materials 7(1), 3 (2021) Wiewel et al. [2019] Wiewel, S., Becher, M., Thuerey, N.: Latent space physics: Towards learning the temporal evolution of fluid flow. In: Computer Graphics Forum, vol. 38, pp. 71–82 (2019). Wiley Online Library Nakamura et al. [2021] Nakamura, T., Fukami, K., Hasegawa, K., Nabae, Y., Fukagata, K.: Convolutional neural network and long short-term memory based reduced order surrogate for minimal turbulent channel flow. Physics of Fluids 33(2) (2021) Maulik et al. [2021] Maulik, R., Lusch, B., Balaprakash, P.: Reduced-order modeling of advection-dominated systems with recurrent neural networks and convolutional autoencoders. Physics of Fluids 33(3) (2021) Vlachas et al. [2022] Vlachas, P.R., Arampatzis, G., Uhler, C., Koumoutsakos, P.: Multiscale simulations of complex systems by learning their effective dynamics. Nature Machine Intelligence 4(4), 359–366 (2022) Rautela et al. [2022] Rautela, M., Senthilnath, J., Monaco, E., Gopalakrishnan, S.: Delamination prediction in composite panels using unsupervised-feature learning methods with wavelet-enhanced guided wave representations. Composite Structures 291, 115579 (2022) Scheinker et al. [2023] Scheinker, A., Cropp, F., Filippetto, D.: Adaptive autoencoder latent space tuning for more robust machine learning beyond the training set for six-dimensional phase space diagnostics of a time-varying ultrafast electron-diffraction compact accelerator. Physical Review E 107(4), 045302 (2023) Kingma and Welling [2013] Kingma, D.P., Welling, M.: Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114 (2013) Hartmann et al. 
[2022] Hartmann, G., Goetzke, G., Düsterer, S., Feuer-Forson, P., Lever, F., Meier, D., Möller, F., Vera Ramirez, L., Guehr, M., Tiedtke, K., et al.: Unsupervised real-world knowledge extraction via disentangled variational autoencoders for photon diagnostics. Scientific Reports 12(1), 20783 (2022) Scheinker et al. [2023] Scheinker, A., Filippetto, D., Cropp, F.: 6d phase space diagnostics based on adaptively tuned physics-informed generative convolutional neural networks. In: Journal of Physics: Conference Series, vol. 2420, p. 012068 (2023). IOP Publishing Cathey et al. [2018] Cathey, B., Cousineau, S., Aleksandrov, A., Zhukov, A.: First six dimensional phase space measurement of an accelerator beam. Physical Review Letters 121(6), 064804 (2018) Tenenbaum [2005] Tenenbaum, P.: Lucretia: A matlab-based toolbox for the modelling and simulation of single-pass electron beam transport systems. In: Proceedings of the 2005 Particle Accelerator Conference, pp. 4197–4199 (2005). IEEE Young and Billen [2003] Young, L., Billen, J.: The particle tracking code parmela. In: Proceedings of the Particle Accelerator Conference, vol. 5, pp. 3521–3523 (2003) Pang and Rybarcyk [2014] Pang, X., Rybarcyk, L.: Gpu accelerated online multi-particle beam dynamics simulator for ion linear particle accelerators. Computer Physics Communications 185(3), 744–753 (2014) Adelmann [2019] Adelmann, A.: On nonintrusive uncertainty quantification and surrogate model construction in particle accelerator modeling. SIAM/ASA Journal on Uncertainty Quantification 7(2), 383–416 (2019) Newton [1970] Newton, R.: Inverse problems in physics. SIAM Review 12(3), 346–356 (1970) Wolski et al. [2022] Wolski, A., Johnson, M.A., King, M., Militsyn, B.L., Williams, P.H.: Transverse phase space tomography in an accelerator test facility using image compression and machine learning. Physical Review Accelerators and Beams 25(12), 122803 (2022) Mayet et al. [2022] Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022) Zhu et al. [2] Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. [2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. 
[2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. 
[2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Oca Zapiain, D., Stewart, J.A., Dingreville, R.: Accelerating phase-field-based microstructure evolution predictions via surrogate models trained by machine learning methods. npj Computational Materials 7(1), 3 (2021) Wiewel et al. [2019] Wiewel, S., Becher, M., Thuerey, N.: Latent space physics: Towards learning the temporal evolution of fluid flow. In: Computer Graphics Forum, vol. 38, pp. 71–82 (2019). Wiley Online Library Nakamura et al. [2021] Nakamura, T., Fukami, K., Hasegawa, K., Nabae, Y., Fukagata, K.: Convolutional neural network and long short-term memory based reduced order surrogate for minimal turbulent channel flow. Physics of Fluids 33(2) (2021) Maulik et al. 
[2021] Maulik, R., Lusch, B., Balaprakash, P.: Reduced-order modeling of advection-dominated systems with recurrent neural networks and convolutional autoencoders. Physics of Fluids 33(3) (2021) Vlachas et al. [2022] Vlachas, P.R., Arampatzis, G., Uhler, C., Koumoutsakos, P.: Multiscale simulations of complex systems by learning their effective dynamics. Nature Machine Intelligence 4(4), 359–366 (2022) Rautela et al. [2022] Rautela, M., Senthilnath, J., Monaco, E., Gopalakrishnan, S.: Delamination prediction in composite panels using unsupervised-feature learning methods with wavelet-enhanced guided wave representations. Composite Structures 291, 115579 (2022) Scheinker et al. [2023] Scheinker, A., Cropp, F., Filippetto, D.: Adaptive autoencoder latent space tuning for more robust machine learning beyond the training set for six-dimensional phase space diagnostics of a time-varying ultrafast electron-diffraction compact accelerator. Physical Review E 107(4), 045302 (2023) Kingma and Welling [2013] Kingma, D.P., Welling, M.: Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114 (2013) Hartmann et al. [2022] Hartmann, G., Goetzke, G., Düsterer, S., Feuer-Forson, P., Lever, F., Meier, D., Möller, F., Vera Ramirez, L., Guehr, M., Tiedtke, K., et al.: Unsupervised real-world knowledge extraction via disentangled variational autoencoders for photon diagnostics. Scientific Reports 12(1), 20783 (2022) Scheinker et al. [2023] Scheinker, A., Filippetto, D., Cropp, F.: 6d phase space diagnostics based on adaptively tuned physics-informed generative convolutional neural networks. In: Journal of Physics: Conference Series, vol. 2420, p. 012068 (2023). IOP Publishing Cathey et al. [2018] Cathey, B., Cousineau, S., Aleksandrov, A., Zhukov, A.: First six dimensional phase space measurement of an accelerator beam. Physical Review Letters 121(6), 064804 (2018) Tenenbaum [2005] Tenenbaum, P.: Lucretia: A matlab-based toolbox for the modelling and simulation of single-pass electron beam transport systems. In: Proceedings of the 2005 Particle Accelerator Conference, pp. 4197–4199 (2005). IEEE Young and Billen [2003] Young, L., Billen, J.: The particle tracking code parmela. In: Proceedings of the Particle Accelerator Conference, vol. 5, pp. 3521–3523 (2003) Pang and Rybarcyk [2014] Pang, X., Rybarcyk, L.: Gpu accelerated online multi-particle beam dynamics simulator for ion linear particle accelerators. Computer Physics Communications 185(3), 744–753 (2014) Adelmann [2019] Adelmann, A.: On nonintrusive uncertainty quantification and surrogate model construction in particle accelerator modeling. SIAM/ASA Journal on Uncertainty Quantification 7(2), 383–416 (2019) Newton [1970] Newton, R.: Inverse problems in physics. SIAM Review 12(3), 346–356 (1970) Wolski et al. [2022] Wolski, A., Johnson, M.A., King, M., Militsyn, B.L., Williams, P.H.: Transverse phase space tomography in an accelerator test facility using image compression and machine learning. Physical Review Accelerators and Beams 25(12), 122803 (2022) Mayet et al. [2022] Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022) Zhu et al. 
[2] Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. [2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. 
Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. 
Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Hartmann, G., Goetzke, G., Düsterer, S., Feuer-Forson, P., Lever, F., Meier, D., Möller, F., Vera Ramirez, L., Guehr, M., Tiedtke, K., et al.: Unsupervised real-world knowledge extraction via disentangled variational autoencoders for photon diagnostics. Scientific Reports 12(1), 20783 (2022) Scheinker et al. 
[2023] Scheinker, A., Filippetto, D., Cropp, F.: 6d phase space diagnostics based on adaptively tuned physics-informed generative convolutional neural networks. In: Journal of Physics: Conference Series, vol. 2420, p. 012068 (2023). IOP Publishing Cathey et al. [2018] Cathey, B., Cousineau, S., Aleksandrov, A., Zhukov, A.: First six dimensional phase space measurement of an accelerator beam. Physical Review Letters 121(6), 064804 (2018) Tenenbaum [2005] Tenenbaum, P.: Lucretia: A matlab-based toolbox for the modelling and simulation of single-pass electron beam transport systems. In: Proceedings of the 2005 Particle Accelerator Conference, pp. 4197–4199 (2005). IEEE Young and Billen [2003] Young, L., Billen, J.: The particle tracking code parmela. In: Proceedings of the Particle Accelerator Conference, vol. 5, pp. 3521–3523 (2003) Pang and Rybarcyk [2014] Pang, X., Rybarcyk, L.: Gpu accelerated online multi-particle beam dynamics simulator for ion linear particle accelerators. Computer Physics Communications 185(3), 744–753 (2014) Adelmann [2019] Adelmann, A.: On nonintrusive uncertainty quantification and surrogate model construction in particle accelerator modeling. SIAM/ASA Journal on Uncertainty Quantification 7(2), 383–416 (2019) Newton [1970] Newton, R.: Inverse problems in physics. SIAM Review 12(3), 346–356 (1970) Wolski et al. [2022] Wolski, A., Johnson, M.A., King, M., Militsyn, B.L., Williams, P.H.: Transverse phase space tomography in an accelerator test facility using image compression and machine learning. Physical Review Accelerators and Beams 25(12), 122803 (2022) Mayet et al. [2022] Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022) Zhu et al. [2] Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. [2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. 
[2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. 
Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Scheinker, A., Filippetto, D., Cropp, F.: 6d phase space diagnostics based on adaptively tuned physics-informed generative convolutional neural networks. In: Journal of Physics: Conference Series, vol. 2420, p. 012068 (2023). IOP Publishing Cathey et al. [2018] Cathey, B., Cousineau, S., Aleksandrov, A., Zhukov, A.: First six dimensional phase space measurement of an accelerator beam. Physical Review Letters 121(6), 064804 (2018) Tenenbaum [2005] Tenenbaum, P.: Lucretia: A matlab-based toolbox for the modelling and simulation of single-pass electron beam transport systems. In: Proceedings of the 2005 Particle Accelerator Conference, pp. 4197–4199 (2005). IEEE Young and Billen [2003] Young, L., Billen, J.: The particle tracking code parmela. In: Proceedings of the Particle Accelerator Conference, vol. 5, pp. 
3521–3523 (2003) Pang and Rybarcyk [2014] Pang, X., Rybarcyk, L.: Gpu accelerated online multi-particle beam dynamics simulator for ion linear particle accelerators. Computer Physics Communications 185(3), 744–753 (2014) Adelmann [2019] Adelmann, A.: On nonintrusive uncertainty quantification and surrogate model construction in particle accelerator modeling. SIAM/ASA Journal on Uncertainty Quantification 7(2), 383–416 (2019) Newton [1970] Newton, R.: Inverse problems in physics. SIAM Review 12(3), 346–356 (1970) Wolski et al. [2022] Wolski, A., Johnson, M.A., King, M., Militsyn, B.L., Williams, P.H.: Transverse phase space tomography in an accelerator test facility using image compression and machine learning. Physical Review Accelerators and Beams 25(12), 122803 (2022) Mayet et al. [2022] Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022) Zhu et al. [2] Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. [2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. 
[2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. 
arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Cathey, B., Cousineau, S., Aleksandrov, A., Zhukov, A.: First six dimensional phase space measurement of an accelerator beam. Physical Review Letters 121(6), 064804 (2018) Tenenbaum [2005] Tenenbaum, P.: Lucretia: A matlab-based toolbox for the modelling and simulation of single-pass electron beam transport systems. In: Proceedings of the 2005 Particle Accelerator Conference, pp. 4197–4199 (2005). IEEE Young and Billen [2003] Young, L., Billen, J.: The particle tracking code parmela. In: Proceedings of the Particle Accelerator Conference, vol. 5, pp. 3521–3523 (2003) Pang and Rybarcyk [2014] Pang, X., Rybarcyk, L.: Gpu accelerated online multi-particle beam dynamics simulator for ion linear particle accelerators. Computer Physics Communications 185(3), 744–753 (2014) Adelmann [2019] Adelmann, A.: On nonintrusive uncertainty quantification and surrogate model construction in particle accelerator modeling. SIAM/ASA Journal on Uncertainty Quantification 7(2), 383–416 (2019) Newton [1970] Newton, R.: Inverse problems in physics. SIAM Review 12(3), 346–356 (1970) Wolski et al. [2022] Wolski, A., Johnson, M.A., King, M., Militsyn, B.L., Williams, P.H.: Transverse phase space tomography in an accelerator test facility using image compression and machine learning. Physical Review Accelerators and Beams 25(12), 122803 (2022) Mayet et al. [2022] Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022) Zhu et al. 
[2] Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. [2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. 
[2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. 
[2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Tenenbaum, P.: Lucretia: A matlab-based toolbox for the modelling and simulation of single-pass electron beam transport systems. In: Proceedings of the 2005 Particle Accelerator Conference, pp. 4197–4199 (2005). IEEE Young and Billen [2003] Young, L., Billen, J.: The particle tracking code parmela. In: Proceedings of the Particle Accelerator Conference, vol. 5, pp. 3521–3523 (2003) Pang and Rybarcyk [2014] Pang, X., Rybarcyk, L.: Gpu accelerated online multi-particle beam dynamics simulator for ion linear particle accelerators. Computer Physics Communications 185(3), 744–753 (2014) Adelmann [2019] Adelmann, A.: On nonintrusive uncertainty quantification and surrogate model construction in particle accelerator modeling. SIAM/ASA Journal on Uncertainty Quantification 7(2), 383–416 (2019) Newton [1970] Newton, R.: Inverse problems in physics. SIAM Review 12(3), 346–356 (1970) Wolski et al. [2022] Wolski, A., Johnson, M.A., King, M., Militsyn, B.L., Williams, P.H.: Transverse phase space tomography in an accelerator test facility using image compression and machine learning. Physical Review Accelerators and Beams 25(12), 122803 (2022) Mayet et al. [2022] Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022) Zhu et al. [2] Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. [2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. 
Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. 
Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Young, L., Billen, J.: The particle tracking code parmela. In: Proceedings of the Particle Accelerator Conference, vol. 5, pp. 3521–3523 (2003) Pang and Rybarcyk [2014] Pang, X., Rybarcyk, L.: Gpu accelerated online multi-particle beam dynamics simulator for ion linear particle accelerators. Computer Physics Communications 185(3), 744–753 (2014) Adelmann [2019] Adelmann, A.: On nonintrusive uncertainty quantification and surrogate model construction in particle accelerator modeling. SIAM/ASA Journal on Uncertainty Quantification 7(2), 383–416 (2019) Newton [1970] Newton, R.: Inverse problems in physics. SIAM Review 12(3), 346–356 (1970) Wolski et al. [2022] Wolski, A., Johnson, M.A., King, M., Militsyn, B.L., Williams, P.H.: Transverse phase space tomography in an accelerator test facility using image compression and machine learning. 
Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. 
[2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. 
[2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. 
[2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. 
[2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. 
[2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. 
[2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. 
[2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. 
arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. 
IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. 
In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. 
IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. 
[2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. 
Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Scheinker, A., Pokharel, R.: Physics-constrained 3D convolutional neural networks for electrodynamics. APL Machine Learning 1(2) (2023) Wandel et al. [2021] Wandel, N., Weinmann, M., Klein, R.: Teaching the incompressible navier–stokes equations to fast neural surrogate models in three dimensions. Physics of Fluids 33(4) (2021) Shi et al. [2015] Shi, X., Chen, Z., Wang, H., Yeung, D.-Y., Wong, W.-K., Woo, W.-c.: Convolutional lstm network: A machine learning approach for precipitation nowcasting. Advances in neural information processing systems 28 (2015) Cheng et al. [2020] Cheng, M., Fang, F., Pain, C.C., Navon, I.: Data-driven modelling of nonlinear spatio-temporal fluid flows using a deep convolutional generative adversarial network. 
Computer Methods in Applied Mechanics and Engineering 365, 113000 (2020) Kipf and Welling [2016] Kipf, T.N., Welling, M.: Semi-supervised classification with graph convolutional networks. arXiv preprint arXiv:1609.02907 (2016) Chen et al. [2021] Chen, J., Hachem, E., Viquerat, J.: Graph neural networks for laminar flow prediction around random two-dimensional shapes. Physics of Fluids 33(12) (2021) Scheinker [2021] Scheinker, A.: Adaptive machine learning for time-varying systems: low dimensional latent space tuning. Journal of Instrumentation 16(10), 10008 (2021) Montes de Oca Zapiain et al. [2021] Oca Zapiain, D., Stewart, J.A., Dingreville, R.: Accelerating phase-field-based microstructure evolution predictions via surrogate models trained by machine learning methods. npj Computational Materials 7(1), 3 (2021) Wiewel et al. [2019] Wiewel, S., Becher, M., Thuerey, N.: Latent space physics: Towards learning the temporal evolution of fluid flow. In: Computer Graphics Forum, vol. 38, pp. 71–82 (2019). Wiley Online Library Nakamura et al. [2021] Nakamura, T., Fukami, K., Hasegawa, K., Nabae, Y., Fukagata, K.: Convolutional neural network and long short-term memory based reduced order surrogate for minimal turbulent channel flow. Physics of Fluids 33(2) (2021) Maulik et al. [2021] Maulik, R., Lusch, B., Balaprakash, P.: Reduced-order modeling of advection-dominated systems with recurrent neural networks and convolutional autoencoders. Physics of Fluids 33(3) (2021) Vlachas et al. [2022] Vlachas, P.R., Arampatzis, G., Uhler, C., Koumoutsakos, P.: Multiscale simulations of complex systems by learning their effective dynamics. Nature Machine Intelligence 4(4), 359–366 (2022) Rautela et al. [2022] Rautela, M., Senthilnath, J., Monaco, E., Gopalakrishnan, S.: Delamination prediction in composite panels using unsupervised-feature learning methods with wavelet-enhanced guided wave representations. Composite Structures 291, 115579 (2022) Scheinker et al. [2023] Scheinker, A., Cropp, F., Filippetto, D.: Adaptive autoencoder latent space tuning for more robust machine learning beyond the training set for six-dimensional phase space diagnostics of a time-varying ultrafast electron-diffraction compact accelerator. Physical Review E 107(4), 045302 (2023) Kingma and Welling [2013] Kingma, D.P., Welling, M.: Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114 (2013) Hartmann et al. [2022] Hartmann, G., Goetzke, G., Düsterer, S., Feuer-Forson, P., Lever, F., Meier, D., Möller, F., Vera Ramirez, L., Guehr, M., Tiedtke, K., et al.: Unsupervised real-world knowledge extraction via disentangled variational autoencoders for photon diagnostics. Scientific Reports 12(1), 20783 (2022) Scheinker et al. [2023] Scheinker, A., Filippetto, D., Cropp, F.: 6d phase space diagnostics based on adaptively tuned physics-informed generative convolutional neural networks. In: Journal of Physics: Conference Series, vol. 2420, p. 012068 (2023). IOP Publishing Cathey et al. [2018] Cathey, B., Cousineau, S., Aleksandrov, A., Zhukov, A.: First six dimensional phase space measurement of an accelerator beam. Physical Review Letters 121(6), 064804 (2018) Tenenbaum [2005] Tenenbaum, P.: Lucretia: A matlab-based toolbox for the modelling and simulation of single-pass electron beam transport systems. In: Proceedings of the 2005 Particle Accelerator Conference, pp. 4197–4199 (2005). IEEE Young and Billen [2003] Young, L., Billen, J.: The particle tracking code parmela. In: Proceedings of the Particle Accelerator Conference, vol. 
5, pp. 3521–3523 (2003) Pang and Rybarcyk [2014] Pang, X., Rybarcyk, L.: Gpu accelerated online multi-particle beam dynamics simulator for ion linear particle accelerators. Computer Physics Communications 185(3), 744–753 (2014) Adelmann [2019] Adelmann, A.: On nonintrusive uncertainty quantification and surrogate model construction in particle accelerator modeling. SIAM/ASA Journal on Uncertainty Quantification 7(2), 383–416 (2019) Newton [1970] Newton, R.: Inverse problems in physics. SIAM Review 12(3), 346–356 (1970) Wolski et al. [2022] Wolski, A., Johnson, M.A., King, M., Militsyn, B.L., Williams, P.H.: Transverse phase space tomography in an accelerator test facility using image compression and machine learning. Physical Review Accelerators and Beams 25(12), 122803 (2022) Mayet et al. [2022] Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022) Zhu et al. [2] Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. [2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. 
[2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. 
arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Wandel, N., Weinmann, M., Klein, R.: Teaching the incompressible navier–stokes equations to fast neural surrogate models in three dimensions. Physics of Fluids 33(4) (2021) Shi et al. [2015] Shi, X., Chen, Z., Wang, H., Yeung, D.-Y., Wong, W.-K., Woo, W.-c.: Convolutional lstm network: A machine learning approach for precipitation nowcasting. Advances in neural information processing systems 28 (2015) Cheng et al. [2020] Cheng, M., Fang, F., Pain, C.C., Navon, I.: Data-driven modelling of nonlinear spatio-temporal fluid flows using a deep convolutional generative adversarial network. Computer Methods in Applied Mechanics and Engineering 365, 113000 (2020) Kipf and Welling [2016] Kipf, T.N., Welling, M.: Semi-supervised classification with graph convolutional networks. arXiv preprint arXiv:1609.02907 (2016) Chen et al. [2021] Chen, J., Hachem, E., Viquerat, J.: Graph neural networks for laminar flow prediction around random two-dimensional shapes. Physics of Fluids 33(12) (2021) Scheinker [2021] Scheinker, A.: Adaptive machine learning for time-varying systems: low dimensional latent space tuning. Journal of Instrumentation 16(10), 10008 (2021) Montes de Oca Zapiain et al. [2021] Oca Zapiain, D., Stewart, J.A., Dingreville, R.: Accelerating phase-field-based microstructure evolution predictions via surrogate models trained by machine learning methods. npj Computational Materials 7(1), 3 (2021) Wiewel et al. [2019] Wiewel, S., Becher, M., Thuerey, N.: Latent space physics: Towards learning the temporal evolution of fluid flow. In: Computer Graphics Forum, vol. 38, pp. 71–82 (2019). Wiley Online Library Nakamura et al. 
[2021] Nakamura, T., Fukami, K., Hasegawa, K., Nabae, Y., Fukagata, K.: Convolutional neural network and long short-term memory based reduced order surrogate for minimal turbulent channel flow. Physics of Fluids 33(2) (2021) Maulik et al. [2021] Maulik, R., Lusch, B., Balaprakash, P.: Reduced-order modeling of advection-dominated systems with recurrent neural networks and convolutional autoencoders. Physics of Fluids 33(3) (2021) Vlachas et al. [2022] Vlachas, P.R., Arampatzis, G., Uhler, C., Koumoutsakos, P.: Multiscale simulations of complex systems by learning their effective dynamics. Nature Machine Intelligence 4(4), 359–366 (2022) Rautela et al. [2022] Rautela, M., Senthilnath, J., Monaco, E., Gopalakrishnan, S.: Delamination prediction in composite panels using unsupervised-feature learning methods with wavelet-enhanced guided wave representations. Composite Structures 291, 115579 (2022) Scheinker et al. [2023] Scheinker, A., Cropp, F., Filippetto, D.: Adaptive autoencoder latent space tuning for more robust machine learning beyond the training set for six-dimensional phase space diagnostics of a time-varying ultrafast electron-diffraction compact accelerator. Physical Review E 107(4), 045302 (2023) Kingma and Welling [2013] Kingma, D.P., Welling, M.: Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114 (2013) Hartmann et al. [2022] Hartmann, G., Goetzke, G., Düsterer, S., Feuer-Forson, P., Lever, F., Meier, D., Möller, F., Vera Ramirez, L., Guehr, M., Tiedtke, K., et al.: Unsupervised real-world knowledge extraction via disentangled variational autoencoders for photon diagnostics. Scientific Reports 12(1), 20783 (2022) Scheinker et al. [2023] Scheinker, A., Filippetto, D., Cropp, F.: 6d phase space diagnostics based on adaptively tuned physics-informed generative convolutional neural networks. In: Journal of Physics: Conference Series, vol. 2420, p. 012068 (2023). IOP Publishing Cathey et al. [2018] Cathey, B., Cousineau, S., Aleksandrov, A., Zhukov, A.: First six dimensional phase space measurement of an accelerator beam. Physical Review Letters 121(6), 064804 (2018) Tenenbaum [2005] Tenenbaum, P.: Lucretia: A matlab-based toolbox for the modelling and simulation of single-pass electron beam transport systems. In: Proceedings of the 2005 Particle Accelerator Conference, pp. 4197–4199 (2005). IEEE Young and Billen [2003] Young, L., Billen, J.: The particle tracking code parmela. In: Proceedings of the Particle Accelerator Conference, vol. 5, pp. 3521–3523 (2003) Pang and Rybarcyk [2014] Pang, X., Rybarcyk, L.: Gpu accelerated online multi-particle beam dynamics simulator for ion linear particle accelerators. Computer Physics Communications 185(3), 744–753 (2014) Adelmann [2019] Adelmann, A.: On nonintrusive uncertainty quantification and surrogate model construction in particle accelerator modeling. SIAM/ASA Journal on Uncertainty Quantification 7(2), 383–416 (2019) Newton [1970] Newton, R.: Inverse problems in physics. SIAM Review 12(3), 346–356 (1970) Wolski et al. [2022] Wolski, A., Johnson, M.A., King, M., Militsyn, B.L., Williams, P.H.: Transverse phase space tomography in an accelerator test facility using image compression and machine learning. Physical Review Accelerators and Beams 25(12), 122803 (2022) Mayet et al. 
[2022] Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022) Zhu et al. [2] Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. [2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. 
The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. 
[2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Shi, X., Chen, Z., Wang, H., Yeung, D.-Y., Wong, W.-K., Woo, W.-c.: Convolutional lstm network: A machine learning approach for precipitation nowcasting. Advances in neural information processing systems 28 (2015) Cheng et al. [2020] Cheng, M., Fang, F., Pain, C.C., Navon, I.: Data-driven modelling of nonlinear spatio-temporal fluid flows using a deep convolutional generative adversarial network. Computer Methods in Applied Mechanics and Engineering 365, 113000 (2020) Kipf and Welling [2016] Kipf, T.N., Welling, M.: Semi-supervised classification with graph convolutional networks. arXiv preprint arXiv:1609.02907 (2016) Chen et al. [2021] Chen, J., Hachem, E., Viquerat, J.: Graph neural networks for laminar flow prediction around random two-dimensional shapes. Physics of Fluids 33(12) (2021) Scheinker [2021] Scheinker, A.: Adaptive machine learning for time-varying systems: low dimensional latent space tuning. Journal of Instrumentation 16(10), 10008 (2021) Montes de Oca Zapiain et al. [2021] Oca Zapiain, D., Stewart, J.A., Dingreville, R.: Accelerating phase-field-based microstructure evolution predictions via surrogate models trained by machine learning methods. npj Computational Materials 7(1), 3 (2021) Wiewel et al. [2019] Wiewel, S., Becher, M., Thuerey, N.: Latent space physics: Towards learning the temporal evolution of fluid flow. In: Computer Graphics Forum, vol. 38, pp. 71–82 (2019). Wiley Online Library Nakamura et al. [2021] Nakamura, T., Fukami, K., Hasegawa, K., Nabae, Y., Fukagata, K.: Convolutional neural network and long short-term memory based reduced order surrogate for minimal turbulent channel flow. Physics of Fluids 33(2) (2021) Maulik et al. [2021] Maulik, R., Lusch, B., Balaprakash, P.: Reduced-order modeling of advection-dominated systems with recurrent neural networks and convolutional autoencoders. Physics of Fluids 33(3) (2021) Vlachas et al. [2022] Vlachas, P.R., Arampatzis, G., Uhler, C., Koumoutsakos, P.: Multiscale simulations of complex systems by learning their effective dynamics. Nature Machine Intelligence 4(4), 359–366 (2022) Rautela et al. [2022] Rautela, M., Senthilnath, J., Monaco, E., Gopalakrishnan, S.: Delamination prediction in composite panels using unsupervised-feature learning methods with wavelet-enhanced guided wave representations. Composite Structures 291, 115579 (2022) Scheinker et al. [2023] Scheinker, A., Cropp, F., Filippetto, D.: Adaptive autoencoder latent space tuning for more robust machine learning beyond the training set for six-dimensional phase space diagnostics of a time-varying ultrafast electron-diffraction compact accelerator. 
Physical Review E 107(4), 045302 (2023) Kingma and Welling [2013] Kingma, D.P., Welling, M.: Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114 (2013) Hartmann et al. [2022] Hartmann, G., Goetzke, G., Düsterer, S., Feuer-Forson, P., Lever, F., Meier, D., Möller, F., Vera Ramirez, L., Guehr, M., Tiedtke, K., et al.: Unsupervised real-world knowledge extraction via disentangled variational autoencoders for photon diagnostics. Scientific Reports 12(1), 20783 (2022) Scheinker et al. [2023] Scheinker, A., Filippetto, D., Cropp, F.: 6d phase space diagnostics based on adaptively tuned physics-informed generative convolutional neural networks. In: Journal of Physics: Conference Series, vol. 2420, p. 012068 (2023). IOP Publishing Cathey et al. [2018] Cathey, B., Cousineau, S., Aleksandrov, A., Zhukov, A.: First six dimensional phase space measurement of an accelerator beam. Physical Review Letters 121(6), 064804 (2018) Tenenbaum [2005] Tenenbaum, P.: Lucretia: A matlab-based toolbox for the modelling and simulation of single-pass electron beam transport systems. In: Proceedings of the 2005 Particle Accelerator Conference, pp. 4197–4199 (2005). IEEE Young and Billen [2003] Young, L., Billen, J.: The particle tracking code parmela. In: Proceedings of the Particle Accelerator Conference, vol. 5, pp. 3521–3523 (2003) Pang and Rybarcyk [2014] Pang, X., Rybarcyk, L.: Gpu accelerated online multi-particle beam dynamics simulator for ion linear particle accelerators. Computer Physics Communications 185(3), 744–753 (2014) Adelmann [2019] Adelmann, A.: On nonintrusive uncertainty quantification and surrogate model construction in particle accelerator modeling. SIAM/ASA Journal on Uncertainty Quantification 7(2), 383–416 (2019) Newton [1970] Newton, R.: Inverse problems in physics. SIAM Review 12(3), 346–356 (1970) Wolski et al. [2022] Wolski, A., Johnson, M.A., King, M., Militsyn, B.L., Williams, P.H.: Transverse phase space tomography in an accelerator test facility using image compression and machine learning. Physical Review Accelerators and Beams 25(12), 122803 (2022) Mayet et al. [2022] Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022) Zhu et al. [2] Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. [2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. 
Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. 
Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Cheng, M., Fang, F., Pain, C.C., Navon, I.: Data-driven modelling of nonlinear spatio-temporal fluid flows using a deep convolutional generative adversarial network. Computer Methods in Applied Mechanics and Engineering 365, 113000 (2020) Kipf and Welling [2016] Kipf, T.N., Welling, M.: Semi-supervised classification with graph convolutional networks. arXiv preprint arXiv:1609.02907 (2016) Chen et al. 
[2021] Chen, J., Hachem, E., Viquerat, J.: Graph neural networks for laminar flow prediction around random two-dimensional shapes. Physics of Fluids 33(12) (2021) Scheinker [2021] Scheinker, A.: Adaptive machine learning for time-varying systems: low dimensional latent space tuning. Journal of Instrumentation 16(10), 10008 (2021) Montes de Oca Zapiain et al. [2021] Oca Zapiain, D., Stewart, J.A., Dingreville, R.: Accelerating phase-field-based microstructure evolution predictions via surrogate models trained by machine learning methods. npj Computational Materials 7(1), 3 (2021) Wiewel et al. [2019] Wiewel, S., Becher, M., Thuerey, N.: Latent space physics: Towards learning the temporal evolution of fluid flow. In: Computer Graphics Forum, vol. 38, pp. 71–82 (2019). Wiley Online Library Nakamura et al. [2021] Nakamura, T., Fukami, K., Hasegawa, K., Nabae, Y., Fukagata, K.: Convolutional neural network and long short-term memory based reduced order surrogate for minimal turbulent channel flow. Physics of Fluids 33(2) (2021) Maulik et al. [2021] Maulik, R., Lusch, B., Balaprakash, P.: Reduced-order modeling of advection-dominated systems with recurrent neural networks and convolutional autoencoders. Physics of Fluids 33(3) (2021) Vlachas et al. [2022] Vlachas, P.R., Arampatzis, G., Uhler, C., Koumoutsakos, P.: Multiscale simulations of complex systems by learning their effective dynamics. Nature Machine Intelligence 4(4), 359–366 (2022) Rautela et al. [2022] Rautela, M., Senthilnath, J., Monaco, E., Gopalakrishnan, S.: Delamination prediction in composite panels using unsupervised-feature learning methods with wavelet-enhanced guided wave representations. Composite Structures 291, 115579 (2022) Scheinker et al. [2023] Scheinker, A., Cropp, F., Filippetto, D.: Adaptive autoencoder latent space tuning for more robust machine learning beyond the training set for six-dimensional phase space diagnostics of a time-varying ultrafast electron-diffraction compact accelerator. Physical Review E 107(4), 045302 (2023) Kingma and Welling [2013] Kingma, D.P., Welling, M.: Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114 (2013) Hartmann et al. [2022] Hartmann, G., Goetzke, G., Düsterer, S., Feuer-Forson, P., Lever, F., Meier, D., Möller, F., Vera Ramirez, L., Guehr, M., Tiedtke, K., et al.: Unsupervised real-world knowledge extraction via disentangled variational autoencoders for photon diagnostics. Scientific Reports 12(1), 20783 (2022) Scheinker et al. [2023] Scheinker, A., Filippetto, D., Cropp, F.: 6d phase space diagnostics based on adaptively tuned physics-informed generative convolutional neural networks. In: Journal of Physics: Conference Series, vol. 2420, p. 012068 (2023). IOP Publishing Cathey et al. [2018] Cathey, B., Cousineau, S., Aleksandrov, A., Zhukov, A.: First six dimensional phase space measurement of an accelerator beam. Physical Review Letters 121(6), 064804 (2018) Tenenbaum [2005] Tenenbaum, P.: Lucretia: A matlab-based toolbox for the modelling and simulation of single-pass electron beam transport systems. In: Proceedings of the 2005 Particle Accelerator Conference, pp. 4197–4199 (2005). IEEE Young and Billen [2003] Young, L., Billen, J.: The particle tracking code parmela. In: Proceedings of the Particle Accelerator Conference, vol. 5, pp. 3521–3523 (2003) Pang and Rybarcyk [2014] Pang, X., Rybarcyk, L.: Gpu accelerated online multi-particle beam dynamics simulator for ion linear particle accelerators. 
[2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. 
Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Maulik, R., Lusch, B., Balaprakash, P.: Reduced-order modeling of advection-dominated systems with recurrent neural networks and convolutional autoencoders. Physics of Fluids 33(3) (2021) Vlachas et al. [2022] Vlachas, P.R., Arampatzis, G., Uhler, C., Koumoutsakos, P.: Multiscale simulations of complex systems by learning their effective dynamics. Nature Machine Intelligence 4(4), 359–366 (2022) Rautela et al. [2022] Rautela, M., Senthilnath, J., Monaco, E., Gopalakrishnan, S.: Delamination prediction in composite panels using unsupervised-feature learning methods with wavelet-enhanced guided wave representations. Composite Structures 291, 115579 (2022) Scheinker et al. 
[2023] Scheinker, A., Cropp, F., Filippetto, D.: Adaptive autoencoder latent space tuning for more robust machine learning beyond the training set for six-dimensional phase space diagnostics of a time-varying ultrafast electron-diffraction compact accelerator. Physical Review E 107(4), 045302 (2023) Kingma and Welling [2013] Kingma, D.P., Welling, M.: Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114 (2013) Hartmann et al. [2022] Hartmann, G., Goetzke, G., Düsterer, S., Feuer-Forson, P., Lever, F., Meier, D., Möller, F., Vera Ramirez, L., Guehr, M., Tiedtke, K., et al.: Unsupervised real-world knowledge extraction via disentangled variational autoencoders for photon diagnostics. Scientific Reports 12(1), 20783 (2022) Scheinker et al. [2023] Scheinker, A., Filippetto, D., Cropp, F.: 6d phase space diagnostics based on adaptively tuned physics-informed generative convolutional neural networks. In: Journal of Physics: Conference Series, vol. 2420, p. 012068 (2023). IOP Publishing Cathey et al. [2018] Cathey, B., Cousineau, S., Aleksandrov, A., Zhukov, A.: First six dimensional phase space measurement of an accelerator beam. Physical Review Letters 121(6), 064804 (2018) Tenenbaum [2005] Tenenbaum, P.: Lucretia: A matlab-based toolbox for the modelling and simulation of single-pass electron beam transport systems. In: Proceedings of the 2005 Particle Accelerator Conference, pp. 4197–4199 (2005). IEEE Young and Billen [2003] Young, L., Billen, J.: The particle tracking code parmela. In: Proceedings of the Particle Accelerator Conference, vol. 5, pp. 3521–3523 (2003) Pang and Rybarcyk [2014] Pang, X., Rybarcyk, L.: Gpu accelerated online multi-particle beam dynamics simulator for ion linear particle accelerators. Computer Physics Communications 185(3), 744–753 (2014) Adelmann [2019] Adelmann, A.: On nonintrusive uncertainty quantification and surrogate model construction in particle accelerator modeling. SIAM/ASA Journal on Uncertainty Quantification 7(2), 383–416 (2019) Newton [1970] Newton, R.: Inverse problems in physics. SIAM Review 12(3), 346–356 (1970) Wolski et al. [2022] Wolski, A., Johnson, M.A., King, M., Militsyn, B.L., Williams, P.H.: Transverse phase space tomography in an accelerator test facility using image compression and machine learning. Physical Review Accelerators and Beams 25(12), 122803 (2022) Mayet et al. [2022] Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022) Zhu et al. [2] Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. [2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. 
[2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. 
Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Vlachas, P.R., Arampatzis, G., Uhler, C., Koumoutsakos, P.: Multiscale simulations of complex systems by learning their effective dynamics. Nature Machine Intelligence 4(4), 359–366 (2022) Rautela et al. 
[2022] Rautela, M., Senthilnath, J., Monaco, E., Gopalakrishnan, S.: Delamination prediction in composite panels using unsupervised-feature learning methods with wavelet-enhanced guided wave representations. Composite Structures 291, 115579 (2022) Scheinker et al. [2023] Scheinker, A., Cropp, F., Filippetto, D.: Adaptive autoencoder latent space tuning for more robust machine learning beyond the training set for six-dimensional phase space diagnostics of a time-varying ultrafast electron-diffraction compact accelerator. Physical Review E 107(4), 045302 (2023) Kingma and Welling [2013] Kingma, D.P., Welling, M.: Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114 (2013) Hartmann et al. [2022] Hartmann, G., Goetzke, G., Düsterer, S., Feuer-Forson, P., Lever, F., Meier, D., Möller, F., Vera Ramirez, L., Guehr, M., Tiedtke, K., et al.: Unsupervised real-world knowledge extraction via disentangled variational autoencoders for photon diagnostics. Scientific Reports 12(1), 20783 (2022) Scheinker et al. [2023] Scheinker, A., Filippetto, D., Cropp, F.: 6d phase space diagnostics based on adaptively tuned physics-informed generative convolutional neural networks. In: Journal of Physics: Conference Series, vol. 2420, p. 012068 (2023). IOP Publishing Cathey et al. [2018] Cathey, B., Cousineau, S., Aleksandrov, A., Zhukov, A.: First six dimensional phase space measurement of an accelerator beam. Physical Review Letters 121(6), 064804 (2018) Tenenbaum [2005] Tenenbaum, P.: Lucretia: A matlab-based toolbox for the modelling and simulation of single-pass electron beam transport systems. In: Proceedings of the 2005 Particle Accelerator Conference, pp. 4197–4199 (2005). IEEE Young and Billen [2003] Young, L., Billen, J.: The particle tracking code parmela. In: Proceedings of the Particle Accelerator Conference, vol. 5, pp. 3521–3523 (2003) Pang and Rybarcyk [2014] Pang, X., Rybarcyk, L.: Gpu accelerated online multi-particle beam dynamics simulator for ion linear particle accelerators. Computer Physics Communications 185(3), 744–753 (2014) Adelmann [2019] Adelmann, A.: On nonintrusive uncertainty quantification and surrogate model construction in particle accelerator modeling. SIAM/ASA Journal on Uncertainty Quantification 7(2), 383–416 (2019) Newton [1970] Newton, R.: Inverse problems in physics. SIAM Review 12(3), 346–356 (1970) Wolski et al. [2022] Wolski, A., Johnson, M.A., King, M., Militsyn, B.L., Williams, P.H.: Transverse phase space tomography in an accelerator test facility using image compression and machine learning. Physical Review Accelerators and Beams 25(12), 122803 (2022) Mayet et al. [2022] Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022) Zhu et al. [2] Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. 
[2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. 
[2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. 
[2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Rautela, M., Senthilnath, J., Monaco, E., Gopalakrishnan, S.: Delamination prediction in composite panels using unsupervised-feature learning methods with wavelet-enhanced guided wave representations. Composite Structures 291, 115579 (2022) Scheinker et al. [2023] Scheinker, A., Cropp, F., Filippetto, D.: Adaptive autoencoder latent space tuning for more robust machine learning beyond the training set for six-dimensional phase space diagnostics of a time-varying ultrafast electron-diffraction compact accelerator. Physical Review E 107(4), 045302 (2023) Kingma and Welling [2013] Kingma, D.P., Welling, M.: Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114 (2013) Hartmann et al. [2022] Hartmann, G., Goetzke, G., Düsterer, S., Feuer-Forson, P., Lever, F., Meier, D., Möller, F., Vera Ramirez, L., Guehr, M., Tiedtke, K., et al.: Unsupervised real-world knowledge extraction via disentangled variational autoencoders for photon diagnostics. Scientific Reports 12(1), 20783 (2022) Scheinker et al. [2023] Scheinker, A., Filippetto, D., Cropp, F.: 6d phase space diagnostics based on adaptively tuned physics-informed generative convolutional neural networks. In: Journal of Physics: Conference Series, vol. 2420, p. 012068 (2023). IOP Publishing Cathey et al. [2018] Cathey, B., Cousineau, S., Aleksandrov, A., Zhukov, A.: First six dimensional phase space measurement of an accelerator beam. Physical Review Letters 121(6), 064804 (2018) Tenenbaum [2005] Tenenbaum, P.: Lucretia: A matlab-based toolbox for the modelling and simulation of single-pass electron beam transport systems. In: Proceedings of the 2005 Particle Accelerator Conference, pp. 4197–4199 (2005). IEEE Young and Billen [2003] Young, L., Billen, J.: The particle tracking code parmela. In: Proceedings of the Particle Accelerator Conference, vol. 5, pp. 3521–3523 (2003) Pang and Rybarcyk [2014] Pang, X., Rybarcyk, L.: Gpu accelerated online multi-particle beam dynamics simulator for ion linear particle accelerators. Computer Physics Communications 185(3), 744–753 (2014) Adelmann [2019] Adelmann, A.: On nonintrusive uncertainty quantification and surrogate model construction in particle accelerator modeling. SIAM/ASA Journal on Uncertainty Quantification 7(2), 383–416 (2019) Newton [1970] Newton, R.: Inverse problems in physics. SIAM Review 12(3), 346–356 (1970) Wolski et al. [2022] Wolski, A., Johnson, M.A., King, M., Militsyn, B.L., Williams, P.H.: Transverse phase space tomography in an accelerator test facility using image compression and machine learning. Physical Review Accelerators and Beams 25(12), 122803 (2022) Mayet et al. [2022] Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022) Zhu et al. [2] Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. 
[2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. [2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. 
IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. 
In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Scheinker, A., Cropp, F., Filippetto, D.: Adaptive autoencoder latent space tuning for more robust machine learning beyond the training set for six-dimensional phase space diagnostics of a time-varying ultrafast electron-diffraction compact accelerator. Physical Review E 107(4), 045302 (2023) Kingma and Welling [2013] Kingma, D.P., Welling, M.: Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114 (2013) Hartmann et al. [2022] Hartmann, G., Goetzke, G., Düsterer, S., Feuer-Forson, P., Lever, F., Meier, D., Möller, F., Vera Ramirez, L., Guehr, M., Tiedtke, K., et al.: Unsupervised real-world knowledge extraction via disentangled variational autoencoders for photon diagnostics. Scientific Reports 12(1), 20783 (2022) Scheinker et al. [2023] Scheinker, A., Filippetto, D., Cropp, F.: 6d phase space diagnostics based on adaptively tuned physics-informed generative convolutional neural networks. In: Journal of Physics: Conference Series, vol. 2420, p. 012068 (2023). IOP Publishing Cathey et al. [2018] Cathey, B., Cousineau, S., Aleksandrov, A., Zhukov, A.: First six dimensional phase space measurement of an accelerator beam. Physical Review Letters 121(6), 064804 (2018) Tenenbaum [2005] Tenenbaum, P.: Lucretia: A matlab-based toolbox for the modelling and simulation of single-pass electron beam transport systems. In: Proceedings of the 2005 Particle Accelerator Conference, pp. 4197–4199 (2005). IEEE Young and Billen [2003] Young, L., Billen, J.: The particle tracking code parmela. In: Proceedings of the Particle Accelerator Conference, vol. 5, pp. 3521–3523 (2003) Pang and Rybarcyk [2014] Pang, X., Rybarcyk, L.: Gpu accelerated online multi-particle beam dynamics simulator for ion linear particle accelerators. Computer Physics Communications 185(3), 744–753 (2014) Adelmann [2019] Adelmann, A.: On nonintrusive uncertainty quantification and surrogate model construction in particle accelerator modeling. SIAM/ASA Journal on Uncertainty Quantification 7(2), 383–416 (2019) Newton [1970] Newton, R.: Inverse problems in physics. SIAM Review 12(3), 346–356 (1970) Wolski et al. [2022] Wolski, A., Johnson, M.A., King, M., Militsyn, B.L., Williams, P.H.: Transverse phase space tomography in an accelerator test facility using image compression and machine learning. Physical Review Accelerators and Beams 25(12), 122803 (2022) Mayet et al. [2022] Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022) Zhu et al. [2] Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. 
[2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. 
[2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. 
Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018)
Kingma and Welling [2013] Kingma, D.P., Welling, M.: Auto-encoding variational Bayes. arXiv preprint arXiv:1312.6114 (2013)
Hartmann et al. [2022] Hartmann, G., Goetzke, G., Düsterer, S., Feuer-Forson, P., Lever, F., Meier, D., Möller, F., Vera Ramirez, L., Guehr, M., Tiedtke, K., et al.: Unsupervised real-world knowledge extraction via disentangled variational autoencoders for photon diagnostics. Scientific Reports 12(1), 20783 (2022)
Scheinker et al. [2023] Scheinker, A., Filippetto, D., Cropp, F.: 6D phase space diagnostics based on adaptively tuned physics-informed generative convolutional neural networks. In: Journal of Physics: Conference Series, vol. 2420, p. 012068 (2023). IOP Publishing
Cathey et al. [2018] Cathey, B., Cousineau, S., Aleksandrov, A., Zhukov, A.: First six dimensional phase space measurement of an accelerator beam. Physical Review Letters 121(6), 064804 (2018)
Tenenbaum [2005] Tenenbaum, P.: Lucretia: A MATLAB-based toolbox for the modelling and simulation of single-pass electron beam transport systems. In: Proceedings of the 2005 Particle Accelerator Conference, pp. 4197–4199 (2005). IEEE
Young and Billen [2003] Young, L., Billen, J.: The particle tracking code PARMELA. In: Proceedings of the Particle Accelerator Conference, vol. 5, pp. 3521–3523 (2003)
Pang and Rybarcyk [2014] Pang, X., Rybarcyk, L.: GPU accelerated online multi-particle beam dynamics simulator for ion linear particle accelerators. Computer Physics Communications 185(3), 744–753 (2014)
Adelmann [2019] Adelmann, A.: On nonintrusive uncertainty quantification and surrogate model construction in particle accelerator modeling. SIAM/ASA Journal on Uncertainty Quantification 7(2), 383–416 (2019)
Newton [1970] Newton, R.: Inverse problems in physics. SIAM Review 12(3), 346–356 (1970)
Wolski et al. [2022] Wolski, A., Johnson, M.A., King, M., Militsyn, B.L., Williams, P.H.: Transverse phase space tomography in an accelerator test facility using image compression and machine learning. Physical Review Accelerators and Beams 25(12), 122803 (2022)
Mayet et al. [2022] Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022)
Zhu et al. [2021] Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2021)
Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018)
Caliari et al. [2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep Lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023)
Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense X-ray free-electron-laser pulses using Bayesian optimization. Physical Review Research 5(2), 023114 (2023)
Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical Review Accelerators and Beams 23(7), 074601 (2020)
Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022)
Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient RF cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022)
Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at Jefferson Laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020)
Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018)
Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023)
Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016)
Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (ARIMA) models. Academic Emergency Medicine 5(7), 739–744 (1998)
Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: MADE: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR
Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016)
Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature Computational Science 2(11), 745–757 (2022)
Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023)
Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE Transactions on Neural Networks and Learning Systems 32(1), 4–24 (2020)
Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons (2008)
[2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. 
[2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Newton, R.: Inverse problems in physics. SIAM Review 12(3), 346–356 (1970) Wolski et al. [2022] Wolski, A., Johnson, M.A., King, M., Militsyn, B.L., Williams, P.H.: Transverse phase space tomography in an accelerator test facility using image compression and machine learning. Physical Review Accelerators and Beams 25(12), 122803 (2022) Mayet et al. [2022] Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022) Zhu et al. [2] Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. [2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. 
[2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. 
Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Wolski, A., Johnson, M.A., King, M., Militsyn, B.L., Williams, P.H.: Transverse phase space tomography in an accelerator test facility using image compression and machine learning. Physical Review Accelerators and Beams 25(12), 122803 (2022) Mayet et al. [2022] Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022) Zhu et al. [2] Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. [2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. 
Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. 
Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022) Zhu et al. 
[2] Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. [2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. 
[2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. 
[2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. [2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. 
[2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. 
[2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. [2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. 
Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. 
IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. 
Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. 
IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. 
Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. 
[2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. 
[2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. 
[2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. 
[2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. 
[2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. 
[2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. 
[2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. 
arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. 
IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. 
In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. 
IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. 
[2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. 
[2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. 
IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. 
[2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Gonoskov, A., Wallin, E., Polovinkin, A., Meyerov, I.: Employing machine learning for theory validation and identification of experimental conditions in laser-plasma physics. Scientific reports 9(1), 7043 (2019) AlQuraishi and Sorger [2021] AlQuraishi, M., Sorger, P.K.: Differentiable biology: using deep learning for biophysics-based and data-driven modeling of molecular mechanisms. Nature methods 18(10), 1169–1180 (2021) Reichstein et al. [2019] Reichstein, M., Camps-Valls, G., Stevens, B., Jung, M., Denzler, J., Carvalhais, N., Prabhat, f.: Deep learning and process understanding for data-driven earth system science. Nature 566(7743), 195–204 (2019) Scheinker and Pokharel [2023] Scheinker, A., Pokharel, R.: Physics-constrained 3D convolutional neural networks for electrodynamics. APL Machine Learning 1(2) (2023) Wandel et al. [2021] Wandel, N., Weinmann, M., Klein, R.: Teaching the incompressible navier–stokes equations to fast neural surrogate models in three dimensions. Physics of Fluids 33(4) (2021) Shi et al. [2015] Shi, X., Chen, Z., Wang, H., Yeung, D.-Y., Wong, W.-K., Woo, W.-c.: Convolutional lstm network: A machine learning approach for precipitation nowcasting. Advances in neural information processing systems 28 (2015) Cheng et al. [2020] Cheng, M., Fang, F., Pain, C.C., Navon, I.: Data-driven modelling of nonlinear spatio-temporal fluid flows using a deep convolutional generative adversarial network. Computer Methods in Applied Mechanics and Engineering 365, 113000 (2020) Kipf and Welling [2016] Kipf, T.N., Welling, M.: Semi-supervised classification with graph convolutional networks. arXiv preprint arXiv:1609.02907 (2016) Chen et al. [2021] Chen, J., Hachem, E., Viquerat, J.: Graph neural networks for laminar flow prediction around random two-dimensional shapes. Physics of Fluids 33(12) (2021) Scheinker [2021] Scheinker, A.: Adaptive machine learning for time-varying systems: low dimensional latent space tuning. Journal of Instrumentation 16(10), 10008 (2021) Montes de Oca Zapiain et al. [2021] Oca Zapiain, D., Stewart, J.A., Dingreville, R.: Accelerating phase-field-based microstructure evolution predictions via surrogate models trained by machine learning methods. npj Computational Materials 7(1), 3 (2021) Wiewel et al. [2019] Wiewel, S., Becher, M., Thuerey, N.: Latent space physics: Towards learning the temporal evolution of fluid flow. In: Computer Graphics Forum, vol. 38, pp. 71–82 (2019). Wiley Online Library Nakamura et al. [2021] Nakamura, T., Fukami, K., Hasegawa, K., Nabae, Y., Fukagata, K.: Convolutional neural network and long short-term memory based reduced order surrogate for minimal turbulent channel flow. Physics of Fluids 33(2) (2021) Maulik et al. [2021] Maulik, R., Lusch, B., Balaprakash, P.: Reduced-order modeling of advection-dominated systems with recurrent neural networks and convolutional autoencoders. Physics of Fluids 33(3) (2021) Vlachas et al. [2022] Vlachas, P.R., Arampatzis, G., Uhler, C., Koumoutsakos, P.: Multiscale simulations of complex systems by learning their effective dynamics. Nature Machine Intelligence 4(4), 359–366 (2022) Rautela et al. [2022] Rautela, M., Senthilnath, J., Monaco, E., Gopalakrishnan, S.: Delamination prediction in composite panels using unsupervised-feature learning methods with wavelet-enhanced guided wave representations. 
Composite Structures 291, 115579 (2022) Scheinker et al. [2023] Scheinker, A., Cropp, F., Filippetto, D.: Adaptive autoencoder latent space tuning for more robust machine learning beyond the training set for six-dimensional phase space diagnostics of a time-varying ultrafast electron-diffraction compact accelerator. Physical Review E 107(4), 045302 (2023) Kingma and Welling [2013] Kingma, D.P., Welling, M.: Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114 (2013) Hartmann et al. [2022] Hartmann, G., Goetzke, G., Düsterer, S., Feuer-Forson, P., Lever, F., Meier, D., Möller, F., Vera Ramirez, L., Guehr, M., Tiedtke, K., et al.: Unsupervised real-world knowledge extraction via disentangled variational autoencoders for photon diagnostics. Scientific Reports 12(1), 20783 (2022) Scheinker et al. [2023] Scheinker, A., Filippetto, D., Cropp, F.: 6d phase space diagnostics based on adaptively tuned physics-informed generative convolutional neural networks. In: Journal of Physics: Conference Series, vol. 2420, p. 012068 (2023). IOP Publishing Cathey et al. [2018] Cathey, B., Cousineau, S., Aleksandrov, A., Zhukov, A.: First six dimensional phase space measurement of an accelerator beam. Physical Review Letters 121(6), 064804 (2018) Tenenbaum [2005] Tenenbaum, P.: Lucretia: A matlab-based toolbox for the modelling and simulation of single-pass electron beam transport systems. In: Proceedings of the 2005 Particle Accelerator Conference, pp. 4197–4199 (2005). IEEE Young and Billen [2003] Young, L., Billen, J.: The particle tracking code parmela. In: Proceedings of the Particle Accelerator Conference, vol. 5, pp. 3521–3523 (2003) Pang and Rybarcyk [2014] Pang, X., Rybarcyk, L.: Gpu accelerated online multi-particle beam dynamics simulator for ion linear particle accelerators. Computer Physics Communications 185(3), 744–753 (2014) Adelmann [2019] Adelmann, A.: On nonintrusive uncertainty quantification and surrogate model construction in particle accelerator modeling. SIAM/ASA Journal on Uncertainty Quantification 7(2), 383–416 (2019) Newton [1970] Newton, R.: Inverse problems in physics. SIAM Review 12(3), 346–356 (1970) Wolski et al. [2022] Wolski, A., Johnson, M.A., King, M., Militsyn, B.L., Williams, P.H.: Transverse phase space tomography in an accelerator test facility using image compression and machine learning. Physical Review Accelerators and Beams 25(12), 122803 (2022) Mayet et al. [2022] Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022) Zhu et al. [2] Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. [2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. 
[2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. 
Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) AlQuraishi, M., Sorger, P.K.: Differentiable biology: using deep learning for biophysics-based and data-driven modeling of molecular mechanisms. Nature methods 18(10), 1169–1180 (2021) Reichstein et al. 
[2019] Reichstein, M., Camps-Valls, G., Stevens, B., Jung, M., Denzler, J., Carvalhais, N., Prabhat, f.: Deep learning and process understanding for data-driven earth system science. Nature 566(7743), 195–204 (2019) Scheinker and Pokharel [2023] Scheinker, A., Pokharel, R.: Physics-constrained 3D convolutional neural networks for electrodynamics. APL Machine Learning 1(2) (2023) Wandel et al. [2021] Wandel, N., Weinmann, M., Klein, R.: Teaching the incompressible navier–stokes equations to fast neural surrogate models in three dimensions. Physics of Fluids 33(4) (2021) Shi et al. [2015] Shi, X., Chen, Z., Wang, H., Yeung, D.-Y., Wong, W.-K., Woo, W.-c.: Convolutional lstm network: A machine learning approach for precipitation nowcasting. Advances in neural information processing systems 28 (2015) Cheng et al. [2020] Cheng, M., Fang, F., Pain, C.C., Navon, I.: Data-driven modelling of nonlinear spatio-temporal fluid flows using a deep convolutional generative adversarial network. Computer Methods in Applied Mechanics and Engineering 365, 113000 (2020) Kipf and Welling [2016] Kipf, T.N., Welling, M.: Semi-supervised classification with graph convolutional networks. arXiv preprint arXiv:1609.02907 (2016) Chen et al. [2021] Chen, J., Hachem, E., Viquerat, J.: Graph neural networks for laminar flow prediction around random two-dimensional shapes. Physics of Fluids 33(12) (2021) Scheinker [2021] Scheinker, A.: Adaptive machine learning for time-varying systems: low dimensional latent space tuning. Journal of Instrumentation 16(10), 10008 (2021) Montes de Oca Zapiain et al. [2021] Oca Zapiain, D., Stewart, J.A., Dingreville, R.: Accelerating phase-field-based microstructure evolution predictions via surrogate models trained by machine learning methods. npj Computational Materials 7(1), 3 (2021) Wiewel et al. [2019] Wiewel, S., Becher, M., Thuerey, N.: Latent space physics: Towards learning the temporal evolution of fluid flow. In: Computer Graphics Forum, vol. 38, pp. 71–82 (2019). Wiley Online Library Nakamura et al. [2021] Nakamura, T., Fukami, K., Hasegawa, K., Nabae, Y., Fukagata, K.: Convolutional neural network and long short-term memory based reduced order surrogate for minimal turbulent channel flow. Physics of Fluids 33(2) (2021) Maulik et al. [2021] Maulik, R., Lusch, B., Balaprakash, P.: Reduced-order modeling of advection-dominated systems with recurrent neural networks and convolutional autoencoders. Physics of Fluids 33(3) (2021) Vlachas et al. [2022] Vlachas, P.R., Arampatzis, G., Uhler, C., Koumoutsakos, P.: Multiscale simulations of complex systems by learning their effective dynamics. Nature Machine Intelligence 4(4), 359–366 (2022) Rautela et al. [2022] Rautela, M., Senthilnath, J., Monaco, E., Gopalakrishnan, S.: Delamination prediction in composite panels using unsupervised-feature learning methods with wavelet-enhanced guided wave representations. Composite Structures 291, 115579 (2022) Scheinker et al. [2023] Scheinker, A., Cropp, F., Filippetto, D.: Adaptive autoencoder latent space tuning for more robust machine learning beyond the training set for six-dimensional phase space diagnostics of a time-varying ultrafast electron-diffraction compact accelerator. Physical Review E 107(4), 045302 (2023) Kingma and Welling [2013] Kingma, D.P., Welling, M.: Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114 (2013) Hartmann et al. 
[2022] Hartmann, G., Goetzke, G., Düsterer, S., Feuer-Forson, P., Lever, F., Meier, D., Möller, F., Vera Ramirez, L., Guehr, M., Tiedtke, K., et al.: Unsupervised real-world knowledge extraction via disentangled variational autoencoders for photon diagnostics. Scientific Reports 12(1), 20783 (2022) Scheinker et al. [2023] Scheinker, A., Filippetto, D., Cropp, F.: 6d phase space diagnostics based on adaptively tuned physics-informed generative convolutional neural networks. In: Journal of Physics: Conference Series, vol. 2420, p. 012068 (2023). IOP Publishing Cathey et al. [2018] Cathey, B., Cousineau, S., Aleksandrov, A., Zhukov, A.: First six dimensional phase space measurement of an accelerator beam. Physical Review Letters 121(6), 064804 (2018) Tenenbaum [2005] Tenenbaum, P.: Lucretia: A matlab-based toolbox for the modelling and simulation of single-pass electron beam transport systems. In: Proceedings of the 2005 Particle Accelerator Conference, pp. 4197–4199 (2005). IEEE Young and Billen [2003] Young, L., Billen, J.: The particle tracking code parmela. In: Proceedings of the Particle Accelerator Conference, vol. 5, pp. 3521–3523 (2003) Pang and Rybarcyk [2014] Pang, X., Rybarcyk, L.: Gpu accelerated online multi-particle beam dynamics simulator for ion linear particle accelerators. Computer Physics Communications 185(3), 744–753 (2014) Adelmann [2019] Adelmann, A.: On nonintrusive uncertainty quantification and surrogate model construction in particle accelerator modeling. SIAM/ASA Journal on Uncertainty Quantification 7(2), 383–416 (2019) Newton [1970] Newton, R.: Inverse problems in physics. SIAM Review 12(3), 346–356 (1970) Wolski et al. [2022] Wolski, A., Johnson, M.A., King, M., Militsyn, B.L., Williams, P.H.: Transverse phase space tomography in an accelerator test facility using image compression and machine learning. Physical Review Accelerators and Beams 25(12), 122803 (2022) Mayet et al. [2022] Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022) Zhu et al. [2] Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. [2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. 
[2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. 
[2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Reichstein, M., Camps-Valls, G., Stevens, B., Jung, M., Denzler, J., Carvalhais, N., Prabhat, f.: Deep learning and process understanding for data-driven earth system science. Nature 566(7743), 195–204 (2019) Scheinker and Pokharel [2023] Scheinker, A., Pokharel, R.: Physics-constrained 3D convolutional neural networks for electrodynamics. APL Machine Learning 1(2) (2023) Wandel et al. [2021] Wandel, N., Weinmann, M., Klein, R.: Teaching the incompressible navier–stokes equations to fast neural surrogate models in three dimensions. Physics of Fluids 33(4) (2021) Shi et al. 
[2015] Shi, X., Chen, Z., Wang, H., Yeung, D.-Y., Wong, W.-K., Woo, W.-c.: Convolutional lstm network: A machine learning approach for precipitation nowcasting. Advances in neural information processing systems 28 (2015) Cheng et al. [2020] Cheng, M., Fang, F., Pain, C.C., Navon, I.: Data-driven modelling of nonlinear spatio-temporal fluid flows using a deep convolutional generative adversarial network. Computer Methods in Applied Mechanics and Engineering 365, 113000 (2020) Kipf and Welling [2016] Kipf, T.N., Welling, M.: Semi-supervised classification with graph convolutional networks. arXiv preprint arXiv:1609.02907 (2016) Chen et al. [2021] Chen, J., Hachem, E., Viquerat, J.: Graph neural networks for laminar flow prediction around random two-dimensional shapes. Physics of Fluids 33(12) (2021) Scheinker [2021] Scheinker, A.: Adaptive machine learning for time-varying systems: low dimensional latent space tuning. Journal of Instrumentation 16(10), 10008 (2021) Montes de Oca Zapiain et al. [2021] Oca Zapiain, D., Stewart, J.A., Dingreville, R.: Accelerating phase-field-based microstructure evolution predictions via surrogate models trained by machine learning methods. npj Computational Materials 7(1), 3 (2021) Wiewel et al. [2019] Wiewel, S., Becher, M., Thuerey, N.: Latent space physics: Towards learning the temporal evolution of fluid flow. In: Computer Graphics Forum, vol. 38, pp. 71–82 (2019). Wiley Online Library Nakamura et al. [2021] Nakamura, T., Fukami, K., Hasegawa, K., Nabae, Y., Fukagata, K.: Convolutional neural network and long short-term memory based reduced order surrogate for minimal turbulent channel flow. Physics of Fluids 33(2) (2021) Maulik et al. [2021] Maulik, R., Lusch, B., Balaprakash, P.: Reduced-order modeling of advection-dominated systems with recurrent neural networks and convolutional autoencoders. Physics of Fluids 33(3) (2021) Vlachas et al. [2022] Vlachas, P.R., Arampatzis, G., Uhler, C., Koumoutsakos, P.: Multiscale simulations of complex systems by learning their effective dynamics. Nature Machine Intelligence 4(4), 359–366 (2022) Rautela et al. [2022] Rautela, M., Senthilnath, J., Monaco, E., Gopalakrishnan, S.: Delamination prediction in composite panels using unsupervised-feature learning methods with wavelet-enhanced guided wave representations. Composite Structures 291, 115579 (2022) Scheinker et al. [2023] Scheinker, A., Cropp, F., Filippetto, D.: Adaptive autoencoder latent space tuning for more robust machine learning beyond the training set for six-dimensional phase space diagnostics of a time-varying ultrafast electron-diffraction compact accelerator. Physical Review E 107(4), 045302 (2023) Kingma and Welling [2013] Kingma, D.P., Welling, M.: Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114 (2013) Hartmann et al. [2022] Hartmann, G., Goetzke, G., Düsterer, S., Feuer-Forson, P., Lever, F., Meier, D., Möller, F., Vera Ramirez, L., Guehr, M., Tiedtke, K., et al.: Unsupervised real-world knowledge extraction via disentangled variational autoencoders for photon diagnostics. Scientific Reports 12(1), 20783 (2022) Scheinker et al. [2023] Scheinker, A., Filippetto, D., Cropp, F.: 6d phase space diagnostics based on adaptively tuned physics-informed generative convolutional neural networks. In: Journal of Physics: Conference Series, vol. 2420, p. 012068 (2023). IOP Publishing Cathey et al. [2018] Cathey, B., Cousineau, S., Aleksandrov, A., Zhukov, A.: First six dimensional phase space measurement of an accelerator beam. 
Physical Review Letters 121(6), 064804 (2018) Tenenbaum [2005] Tenenbaum, P.: Lucretia: A matlab-based toolbox for the modelling and simulation of single-pass electron beam transport systems. In: Proceedings of the 2005 Particle Accelerator Conference, pp. 4197–4199 (2005). IEEE Young and Billen [2003] Young, L., Billen, J.: The particle tracking code parmela. In: Proceedings of the Particle Accelerator Conference, vol. 5, pp. 3521–3523 (2003) Pang and Rybarcyk [2014] Pang, X., Rybarcyk, L.: Gpu accelerated online multi-particle beam dynamics simulator for ion linear particle accelerators. Computer Physics Communications 185(3), 744–753 (2014) Adelmann [2019] Adelmann, A.: On nonintrusive uncertainty quantification and surrogate model construction in particle accelerator modeling. SIAM/ASA Journal on Uncertainty Quantification 7(2), 383–416 (2019) Newton [1970] Newton, R.: Inverse problems in physics. SIAM Review 12(3), 346–356 (1970) Wolski et al. [2022] Wolski, A., Johnson, M.A., King, M., Militsyn, B.L., Williams, P.H.: Transverse phase space tomography in an accelerator test facility using image compression and machine learning. Physical Review Accelerators and Beams 25(12), 122803 (2022) Mayet et al. [2022] Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022) Zhu et al. [2] Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. [2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. 
Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. 
IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Scheinker, A., Pokharel, R.: Physics-constrained 3D convolutional neural networks for electrodynamics. APL Machine Learning 1(2) (2023) Wandel et al. [2021] Wandel, N., Weinmann, M., Klein, R.: Teaching the incompressible navier–stokes equations to fast neural surrogate models in three dimensions. Physics of Fluids 33(4) (2021) Shi et al. [2015] Shi, X., Chen, Z., Wang, H., Yeung, D.-Y., Wong, W.-K., Woo, W.-c.: Convolutional lstm network: A machine learning approach for precipitation nowcasting. Advances in neural information processing systems 28 (2015) Cheng et al. [2020] Cheng, M., Fang, F., Pain, C.C., Navon, I.: Data-driven modelling of nonlinear spatio-temporal fluid flows using a deep convolutional generative adversarial network. Computer Methods in Applied Mechanics and Engineering 365, 113000 (2020) Kipf and Welling [2016] Kipf, T.N., Welling, M.: Semi-supervised classification with graph convolutional networks. arXiv preprint arXiv:1609.02907 (2016) Chen et al. [2021] Chen, J., Hachem, E., Viquerat, J.: Graph neural networks for laminar flow prediction around random two-dimensional shapes. Physics of Fluids 33(12) (2021) Scheinker [2021] Scheinker, A.: Adaptive machine learning for time-varying systems: low dimensional latent space tuning. Journal of Instrumentation 16(10), 10008 (2021) Montes de Oca Zapiain et al. 
[2021] Oca Zapiain, D., Stewart, J.A., Dingreville, R.: Accelerating phase-field-based microstructure evolution predictions via surrogate models trained by machine learning methods. npj Computational Materials 7(1), 3 (2021) Wiewel et al. [2019] Wiewel, S., Becher, M., Thuerey, N.: Latent space physics: Towards learning the temporal evolution of fluid flow. In: Computer Graphics Forum, vol. 38, pp. 71–82 (2019). Wiley Online Library Nakamura et al. [2021] Nakamura, T., Fukami, K., Hasegawa, K., Nabae, Y., Fukagata, K.: Convolutional neural network and long short-term memory based reduced order surrogate for minimal turbulent channel flow. Physics of Fluids 33(2) (2021) Maulik et al. [2021] Maulik, R., Lusch, B., Balaprakash, P.: Reduced-order modeling of advection-dominated systems with recurrent neural networks and convolutional autoencoders. Physics of Fluids 33(3) (2021) Vlachas et al. [2022] Vlachas, P.R., Arampatzis, G., Uhler, C., Koumoutsakos, P.: Multiscale simulations of complex systems by learning their effective dynamics. Nature Machine Intelligence 4(4), 359–366 (2022) Rautela et al. [2022] Rautela, M., Senthilnath, J., Monaco, E., Gopalakrishnan, S.: Delamination prediction in composite panels using unsupervised-feature learning methods with wavelet-enhanced guided wave representations. Composite Structures 291, 115579 (2022) Scheinker et al. [2023] Scheinker, A., Cropp, F., Filippetto, D.: Adaptive autoencoder latent space tuning for more robust machine learning beyond the training set for six-dimensional phase space diagnostics of a time-varying ultrafast electron-diffraction compact accelerator. Physical Review E 107(4), 045302 (2023) Kingma and Welling [2013] Kingma, D.P., Welling, M.: Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114 (2013) Hartmann et al. [2022] Hartmann, G., Goetzke, G., Düsterer, S., Feuer-Forson, P., Lever, F., Meier, D., Möller, F., Vera Ramirez, L., Guehr, M., Tiedtke, K., et al.: Unsupervised real-world knowledge extraction via disentangled variational autoencoders for photon diagnostics. Scientific Reports 12(1), 20783 (2022) Scheinker et al. [2023] Scheinker, A., Filippetto, D., Cropp, F.: 6d phase space diagnostics based on adaptively tuned physics-informed generative convolutional neural networks. In: Journal of Physics: Conference Series, vol. 2420, p. 012068 (2023). IOP Publishing Cathey et al. [2018] Cathey, B., Cousineau, S., Aleksandrov, A., Zhukov, A.: First six dimensional phase space measurement of an accelerator beam. Physical Review Letters 121(6), 064804 (2018) Tenenbaum [2005] Tenenbaum, P.: Lucretia: A matlab-based toolbox for the modelling and simulation of single-pass electron beam transport systems. In: Proceedings of the 2005 Particle Accelerator Conference, pp. 4197–4199 (2005). IEEE Young and Billen [2003] Young, L., Billen, J.: The particle tracking code parmela. In: Proceedings of the Particle Accelerator Conference, vol. 5, pp. 3521–3523 (2003) Pang and Rybarcyk [2014] Pang, X., Rybarcyk, L.: Gpu accelerated online multi-particle beam dynamics simulator for ion linear particle accelerators. Computer Physics Communications 185(3), 744–753 (2014) Adelmann [2019] Adelmann, A.: On nonintrusive uncertainty quantification and surrogate model construction in particle accelerator modeling. SIAM/ASA Journal on Uncertainty Quantification 7(2), 383–416 (2019) Newton [1970] Newton, R.: Inverse problems in physics. SIAM Review 12(3), 346–356 (1970) Wolski et al. 
[2022] Wolski, A., Johnson, M.A., King, M., Militsyn, B.L., Williams, P.H.: Transverse phase space tomography in an accelerator test facility using image compression and machine learning. Physical Review Accelerators and Beams 25(12), 122803 (2022) Mayet et al. [2022] Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022) Zhu et al. [2] Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. [2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. 
In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. 
Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018)
Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023)
Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE
He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016)
Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018)
[2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. 
Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Rautela, M., Senthilnath, J., Monaco, E., Gopalakrishnan, S.: Delamination prediction in composite panels using unsupervised-feature learning methods with wavelet-enhanced guided wave representations. Composite Structures 291, 115579 (2022) Scheinker et al. [2023] Scheinker, A., Cropp, F., Filippetto, D.: Adaptive autoencoder latent space tuning for more robust machine learning beyond the training set for six-dimensional phase space diagnostics of a time-varying ultrafast electron-diffraction compact accelerator. Physical Review E 107(4), 045302 (2023) Kingma and Welling [2013] Kingma, D.P., Welling, M.: Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114 (2013) Hartmann et al. 
[2022] Hartmann, G., Goetzke, G., Düsterer, S., Feuer-Forson, P., Lever, F., Meier, D., Möller, F., Vera Ramirez, L., Guehr, M., Tiedtke, K., et al.: Unsupervised real-world knowledge extraction via disentangled variational autoencoders for photon diagnostics. Scientific Reports 12(1), 20783 (2022) Scheinker et al. [2023] Scheinker, A., Filippetto, D., Cropp, F.: 6d phase space diagnostics based on adaptively tuned physics-informed generative convolutional neural networks. In: Journal of Physics: Conference Series, vol. 2420, p. 012068 (2023). IOP Publishing Cathey et al. [2018] Cathey, B., Cousineau, S., Aleksandrov, A., Zhukov, A.: First six dimensional phase space measurement of an accelerator beam. Physical Review Letters 121(6), 064804 (2018) Tenenbaum [2005] Tenenbaum, P.: Lucretia: A matlab-based toolbox for the modelling and simulation of single-pass electron beam transport systems. In: Proceedings of the 2005 Particle Accelerator Conference, pp. 4197–4199 (2005). IEEE Young and Billen [2003] Young, L., Billen, J.: The particle tracking code parmela. In: Proceedings of the Particle Accelerator Conference, vol. 5, pp. 3521–3523 (2003) Pang and Rybarcyk [2014] Pang, X., Rybarcyk, L.: Gpu accelerated online multi-particle beam dynamics simulator for ion linear particle accelerators. Computer Physics Communications 185(3), 744–753 (2014) Adelmann [2019] Adelmann, A.: On nonintrusive uncertainty quantification and surrogate model construction in particle accelerator modeling. SIAM/ASA Journal on Uncertainty Quantification 7(2), 383–416 (2019) Newton [1970] Newton, R.: Inverse problems in physics. SIAM Review 12(3), 346–356 (1970) Wolski et al. [2022] Wolski, A., Johnson, M.A., King, M., Militsyn, B.L., Williams, P.H.: Transverse phase space tomography in an accelerator test facility using image compression and machine learning. Physical Review Accelerators and Beams 25(12), 122803 (2022) Mayet et al. [2022] Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022) Zhu et al. [2] Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. [2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. 
[2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. 
[2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Scheinker, A., Cropp, F., Filippetto, D.: Adaptive autoencoder latent space tuning for more robust machine learning beyond the training set for six-dimensional phase space diagnostics of a time-varying ultrafast electron-diffraction compact accelerator. Physical Review E 107(4), 045302 (2023) Kingma and Welling [2013] Kingma, D.P., Welling, M.: Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114 (2013) Hartmann et al. [2022] Hartmann, G., Goetzke, G., Düsterer, S., Feuer-Forson, P., Lever, F., Meier, D., Möller, F., Vera Ramirez, L., Guehr, M., Tiedtke, K., et al.: Unsupervised real-world knowledge extraction via disentangled variational autoencoders for photon diagnostics. 
Scientific Reports 12(1), 20783 (2022) Scheinker et al. [2023] Scheinker, A., Filippetto, D., Cropp, F.: 6d phase space diagnostics based on adaptively tuned physics-informed generative convolutional neural networks. In: Journal of Physics: Conference Series, vol. 2420, p. 012068 (2023). IOP Publishing Cathey et al. [2018] Cathey, B., Cousineau, S., Aleksandrov, A., Zhukov, A.: First six dimensional phase space measurement of an accelerator beam. Physical Review Letters 121(6), 064804 (2018) Tenenbaum [2005] Tenenbaum, P.: Lucretia: A matlab-based toolbox for the modelling and simulation of single-pass electron beam transport systems. In: Proceedings of the 2005 Particle Accelerator Conference, pp. 4197–4199 (2005). IEEE Young and Billen [2003] Young, L., Billen, J.: The particle tracking code parmela. In: Proceedings of the Particle Accelerator Conference, vol. 5, pp. 3521–3523 (2003) Pang and Rybarcyk [2014] Pang, X., Rybarcyk, L.: Gpu accelerated online multi-particle beam dynamics simulator for ion linear particle accelerators. Computer Physics Communications 185(3), 744–753 (2014) Adelmann [2019] Adelmann, A.: On nonintrusive uncertainty quantification and surrogate model construction in particle accelerator modeling. SIAM/ASA Journal on Uncertainty Quantification 7(2), 383–416 (2019) Newton [1970] Newton, R.: Inverse problems in physics. SIAM Review 12(3), 346–356 (1970) Wolski et al. [2022] Wolski, A., Johnson, M.A., King, M., Militsyn, B.L., Williams, P.H.: Transverse phase space tomography in an accelerator test facility using image compression and machine learning. Physical Review Accelerators and Beams 25(12), 122803 (2022) Mayet et al. [2022] Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022) Zhu et al. [2] Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. [2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. 
[2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. 
Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Kingma, D.P., Welling, M.: Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114 (2013) Hartmann et al. [2022] Hartmann, G., Goetzke, G., Düsterer, S., Feuer-Forson, P., Lever, F., Meier, D., Möller, F., Vera Ramirez, L., Guehr, M., Tiedtke, K., et al.: Unsupervised real-world knowledge extraction via disentangled variational autoencoders for photon diagnostics. Scientific Reports 12(1), 20783 (2022) Scheinker et al. [2023] Scheinker, A., Filippetto, D., Cropp, F.: 6d phase space diagnostics based on adaptively tuned physics-informed generative convolutional neural networks. In: Journal of Physics: Conference Series, vol. 2420, p. 012068 (2023). IOP Publishing Cathey et al. [2018] Cathey, B., Cousineau, S., Aleksandrov, A., Zhukov, A.: First six dimensional phase space measurement of an accelerator beam. 
Physical Review Letters 121(6), 064804 (2018) Tenenbaum [2005] Tenenbaum, P.: Lucretia: A matlab-based toolbox for the modelling and simulation of single-pass electron beam transport systems. In: Proceedings of the 2005 Particle Accelerator Conference, pp. 4197–4199 (2005). IEEE Young and Billen [2003] Young, L., Billen, J.: The particle tracking code parmela. In: Proceedings of the Particle Accelerator Conference, vol. 5, pp. 3521–3523 (2003) Pang and Rybarcyk [2014] Pang, X., Rybarcyk, L.: Gpu accelerated online multi-particle beam dynamics simulator for ion linear particle accelerators. Computer Physics Communications 185(3), 744–753 (2014) Adelmann [2019] Adelmann, A.: On nonintrusive uncertainty quantification and surrogate model construction in particle accelerator modeling. SIAM/ASA Journal on Uncertainty Quantification 7(2), 383–416 (2019) Newton [1970] Newton, R.: Inverse problems in physics. SIAM Review 12(3), 346–356 (1970) Wolski et al. [2022] Wolski, A., Johnson, M.A., King, M., Militsyn, B.L., Williams, P.H.: Transverse phase space tomography in an accelerator test facility using image compression and machine learning. Physical Review Accelerators and Beams 25(12), 122803 (2022) Mayet et al. [2022] Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022) Zhu et al. [2] Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. [2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. 
Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. 
IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Hartmann, G., Goetzke, G., Düsterer, S., Feuer-Forson, P., Lever, F., Meier, D., Möller, F., Vera Ramirez, L., Guehr, M., Tiedtke, K., et al.: Unsupervised real-world knowledge extraction via disentangled variational autoencoders for photon diagnostics. Scientific Reports 12(1), 20783 (2022) Scheinker et al. [2023] Scheinker, A., Filippetto, D., Cropp, F.: 6d phase space diagnostics based on adaptively tuned physics-informed generative convolutional neural networks. In: Journal of Physics: Conference Series, vol. 2420, p. 012068 (2023). IOP Publishing Cathey et al. [2018] Cathey, B., Cousineau, S., Aleksandrov, A., Zhukov, A.: First six dimensional phase space measurement of an accelerator beam. Physical Review Letters 121(6), 064804 (2018) Tenenbaum [2005] Tenenbaum, P.: Lucretia: A matlab-based toolbox for the modelling and simulation of single-pass electron beam transport systems. In: Proceedings of the 2005 Particle Accelerator Conference, pp. 4197–4199 (2005). IEEE Young and Billen [2003] Young, L., Billen, J.: The particle tracking code parmela. In: Proceedings of the Particle Accelerator Conference, vol. 5, pp. 3521–3523 (2003) Pang and Rybarcyk [2014] Pang, X., Rybarcyk, L.: Gpu accelerated online multi-particle beam dynamics simulator for ion linear particle accelerators. 
Computer Physics Communications 185(3), 744–753 (2014) Adelmann [2019] Adelmann, A.: On nonintrusive uncertainty quantification and surrogate model construction in particle accelerator modeling. SIAM/ASA Journal on Uncertainty Quantification 7(2), 383–416 (2019) Newton [1970] Newton, R.: Inverse problems in physics. SIAM Review 12(3), 346–356 (1970) Wolski et al. [2022] Wolski, A., Johnson, M.A., King, M., Militsyn, B.L., Williams, P.H.: Transverse phase space tomography in an accelerator test facility using image compression and machine learning. Physical Review Accelerators and Beams 25(12), 122803 (2022) Mayet et al. [2022] Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022) Zhu et al. [2] Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. [2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. 
Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. 
[2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. 
[2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. [2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. 
[2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. 
Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. 
[2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. 
[2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. 
[2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. 
IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. 
In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. 
[2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. 
[2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. 
[2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. 
[2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. 
Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. 
[2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. 
arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. 
[2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. 
[2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. 
Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. 
[2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. 
[2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. 
Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. 
[2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. 
[2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. 
[2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. 
Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. 
John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. 
[2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. 
[2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. 
[2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. 
Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. 
IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Scheinker, A., Pokharel, R.: Physics-constrained 3D convolutional neural networks for electrodynamics. APL Machine Learning 1(2) (2023) Wandel et al. [2021] Wandel, N., Weinmann, M., Klein, R.: Teaching the incompressible navier–stokes equations to fast neural surrogate models in three dimensions. Physics of Fluids 33(4) (2021) Shi et al. [2015] Shi, X., Chen, Z., Wang, H., Yeung, D.-Y., Wong, W.-K., Woo, W.-c.: Convolutional lstm network: A machine learning approach for precipitation nowcasting. Advances in neural information processing systems 28 (2015) Cheng et al. [2020] Cheng, M., Fang, F., Pain, C.C., Navon, I.: Data-driven modelling of nonlinear spatio-temporal fluid flows using a deep convolutional generative adversarial network. Computer Methods in Applied Mechanics and Engineering 365, 113000 (2020) Kipf and Welling [2016] Kipf, T.N., Welling, M.: Semi-supervised classification with graph convolutional networks. arXiv preprint arXiv:1609.02907 (2016) Chen et al. [2021] Chen, J., Hachem, E., Viquerat, J.: Graph neural networks for laminar flow prediction around random two-dimensional shapes. Physics of Fluids 33(12) (2021) Scheinker [2021] Scheinker, A.: Adaptive machine learning for time-varying systems: low dimensional latent space tuning. Journal of Instrumentation 16(10), 10008 (2021) Montes de Oca Zapiain et al. 
[2021] Oca Zapiain, D., Stewart, J.A., Dingreville, R.: Accelerating phase-field-based microstructure evolution predictions via surrogate models trained by machine learning methods. npj Computational Materials 7(1), 3 (2021) Wiewel et al. [2019] Wiewel, S., Becher, M., Thuerey, N.: Latent space physics: Towards learning the temporal evolution of fluid flow. In: Computer Graphics Forum, vol. 38, pp. 71–82 (2019). Wiley Online Library Nakamura et al. [2021] Nakamura, T., Fukami, K., Hasegawa, K., Nabae, Y., Fukagata, K.: Convolutional neural network and long short-term memory based reduced order surrogate for minimal turbulent channel flow. Physics of Fluids 33(2) (2021) Maulik et al. [2021] Maulik, R., Lusch, B., Balaprakash, P.: Reduced-order modeling of advection-dominated systems with recurrent neural networks and convolutional autoencoders. Physics of Fluids 33(3) (2021) Vlachas et al. [2022] Vlachas, P.R., Arampatzis, G., Uhler, C., Koumoutsakos, P.: Multiscale simulations of complex systems by learning their effective dynamics. Nature Machine Intelligence 4(4), 359–366 (2022) Rautela et al. [2022] Rautela, M., Senthilnath, J., Monaco, E., Gopalakrishnan, S.: Delamination prediction in composite panels using unsupervised-feature learning methods with wavelet-enhanced guided wave representations. Composite Structures 291, 115579 (2022) Scheinker et al. [2023] Scheinker, A., Cropp, F., Filippetto, D.: Adaptive autoencoder latent space tuning for more robust machine learning beyond the training set for six-dimensional phase space diagnostics of a time-varying ultrafast electron-diffraction compact accelerator. Physical Review E 107(4), 045302 (2023) Kingma and Welling [2013] Kingma, D.P., Welling, M.: Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114 (2013) Hartmann et al. [2022] Hartmann, G., Goetzke, G., Düsterer, S., Feuer-Forson, P., Lever, F., Meier, D., Möller, F., Vera Ramirez, L., Guehr, M., Tiedtke, K., et al.: Unsupervised real-world knowledge extraction via disentangled variational autoencoders for photon diagnostics. Scientific Reports 12(1), 20783 (2022) Scheinker et al. [2023] Scheinker, A., Filippetto, D., Cropp, F.: 6d phase space diagnostics based on adaptively tuned physics-informed generative convolutional neural networks. In: Journal of Physics: Conference Series, vol. 2420, p. 012068 (2023). IOP Publishing Cathey et al. [2018] Cathey, B., Cousineau, S., Aleksandrov, A., Zhukov, A.: First six dimensional phase space measurement of an accelerator beam. Physical Review Letters 121(6), 064804 (2018) Tenenbaum [2005] Tenenbaum, P.: Lucretia: A matlab-based toolbox for the modelling and simulation of single-pass electron beam transport systems. In: Proceedings of the 2005 Particle Accelerator Conference, pp. 4197–4199 (2005). IEEE Young and Billen [2003] Young, L., Billen, J.: The particle tracking code parmela. In: Proceedings of the Particle Accelerator Conference, vol. 5, pp. 3521–3523 (2003) Pang and Rybarcyk [2014] Pang, X., Rybarcyk, L.: Gpu accelerated online multi-particle beam dynamics simulator for ion linear particle accelerators. Computer Physics Communications 185(3), 744–753 (2014) Adelmann [2019] Adelmann, A.: On nonintrusive uncertainty quantification and surrogate model construction in particle accelerator modeling. SIAM/ASA Journal on Uncertainty Quantification 7(2), 383–416 (2019) Newton [1970] Newton, R.: Inverse problems in physics. SIAM Review 12(3), 346–356 (1970) Wolski et al. 
[2022] Wolski, A., Johnson, M.A., King, M., Militsyn, B.L., Williams, P.H.: Transverse phase space tomography in an accelerator test facility using image compression and machine learning. Physical Review Accelerators and Beams 25(12), 122803 (2022) Mayet et al. [2022] Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022) Zhu et al. [2] Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. [2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. 
In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. 
[2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Wandel, N., Weinmann, M., Klein, R.: Teaching the incompressible navier–stokes equations to fast neural surrogate models in three dimensions. Physics of Fluids 33(4) (2021) Shi et al. [2015] Shi, X., Chen, Z., Wang, H., Yeung, D.-Y., Wong, W.-K., Woo, W.-c.: Convolutional lstm network: A machine learning approach for precipitation nowcasting. Advances in neural information processing systems 28 (2015) Cheng et al. [2020] Cheng, M., Fang, F., Pain, C.C., Navon, I.: Data-driven modelling of nonlinear spatio-temporal fluid flows using a deep convolutional generative adversarial network. Computer Methods in Applied Mechanics and Engineering 365, 113000 (2020) Kipf and Welling [2016] Kipf, T.N., Welling, M.: Semi-supervised classification with graph convolutional networks. arXiv preprint arXiv:1609.02907 (2016) Chen et al. [2021] Chen, J., Hachem, E., Viquerat, J.: Graph neural networks for laminar flow prediction around random two-dimensional shapes. Physics of Fluids 33(12) (2021) Scheinker [2021] Scheinker, A.: Adaptive machine learning for time-varying systems: low dimensional latent space tuning. Journal of Instrumentation 16(10), 10008 (2021) Montes de Oca Zapiain et al. [2021] Oca Zapiain, D., Stewart, J.A., Dingreville, R.: Accelerating phase-field-based microstructure evolution predictions via surrogate models trained by machine learning methods. npj Computational Materials 7(1), 3 (2021) Wiewel et al. [2019] Wiewel, S., Becher, M., Thuerey, N.: Latent space physics: Towards learning the temporal evolution of fluid flow. In: Computer Graphics Forum, vol. 38, pp. 71–82 (2019). Wiley Online Library Nakamura et al. [2021] Nakamura, T., Fukami, K., Hasegawa, K., Nabae, Y., Fukagata, K.: Convolutional neural network and long short-term memory based reduced order surrogate for minimal turbulent channel flow. Physics of Fluids 33(2) (2021) Maulik et al. [2021] Maulik, R., Lusch, B., Balaprakash, P.: Reduced-order modeling of advection-dominated systems with recurrent neural networks and convolutional autoencoders. Physics of Fluids 33(3) (2021) Vlachas et al. [2022] Vlachas, P.R., Arampatzis, G., Uhler, C., Koumoutsakos, P.: Multiscale simulations of complex systems by learning their effective dynamics. Nature Machine Intelligence 4(4), 359–366 (2022) Rautela et al. 
[2022] Rautela, M., Senthilnath, J., Monaco, E., Gopalakrishnan, S.: Delamination prediction in composite panels using unsupervised-feature learning methods with wavelet-enhanced guided wave representations. Composite Structures 291, 115579 (2022) Scheinker et al. [2023] Scheinker, A., Cropp, F., Filippetto, D.: Adaptive autoencoder latent space tuning for more robust machine learning beyond the training set for six-dimensional phase space diagnostics of a time-varying ultrafast electron-diffraction compact accelerator. Physical Review E 107(4), 045302 (2023) Kingma and Welling [2013] Kingma, D.P., Welling, M.: Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114 (2013) Hartmann et al. [2022] Hartmann, G., Goetzke, G., Düsterer, S., Feuer-Forson, P., Lever, F., Meier, D., Möller, F., Vera Ramirez, L., Guehr, M., Tiedtke, K., et al.: Unsupervised real-world knowledge extraction via disentangled variational autoencoders for photon diagnostics. Scientific Reports 12(1), 20783 (2022) Scheinker et al. [2023] Scheinker, A., Filippetto, D., Cropp, F.: 6d phase space diagnostics based on adaptively tuned physics-informed generative convolutional neural networks. In: Journal of Physics: Conference Series, vol. 2420, p. 012068 (2023). IOP Publishing Cathey et al. [2018] Cathey, B., Cousineau, S., Aleksandrov, A., Zhukov, A.: First six dimensional phase space measurement of an accelerator beam. Physical Review Letters 121(6), 064804 (2018) Tenenbaum [2005] Tenenbaum, P.: Lucretia: A matlab-based toolbox for the modelling and simulation of single-pass electron beam transport systems. In: Proceedings of the 2005 Particle Accelerator Conference, pp. 4197–4199 (2005). IEEE Young and Billen [2003] Young, L., Billen, J.: The particle tracking code parmela. In: Proceedings of the Particle Accelerator Conference, vol. 5, pp. 3521–3523 (2003) Pang and Rybarcyk [2014] Pang, X., Rybarcyk, L.: Gpu accelerated online multi-particle beam dynamics simulator for ion linear particle accelerators. Computer Physics Communications 185(3), 744–753 (2014) Adelmann [2019] Adelmann, A.: On nonintrusive uncertainty quantification and surrogate model construction in particle accelerator modeling. SIAM/ASA Journal on Uncertainty Quantification 7(2), 383–416 (2019) Newton [1970] Newton, R.: Inverse problems in physics. SIAM Review 12(3), 346–356 (1970) Wolski et al. [2022] Wolski, A., Johnson, M.A., King, M., Militsyn, B.L., Williams, P.H.: Transverse phase space tomography in an accelerator test facility using image compression and machine learning. Physical Review Accelerators and Beams 25(12), 122803 (2022) Mayet et al. [2022] Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022) Zhu et al. [2] Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. 
[2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. 
[2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. 
[2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Shi, X., Chen, Z., Wang, H., Yeung, D.-Y., Wong, W.-K., Woo, W.-c.: Convolutional lstm network: A machine learning approach for precipitation nowcasting. Advances in neural information processing systems 28 (2015) Cheng et al. [2020] Cheng, M., Fang, F., Pain, C.C., Navon, I.: Data-driven modelling of nonlinear spatio-temporal fluid flows using a deep convolutional generative adversarial network. Computer Methods in Applied Mechanics and Engineering 365, 113000 (2020) Kipf and Welling [2016] Kipf, T.N., Welling, M.: Semi-supervised classification with graph convolutional networks. arXiv preprint arXiv:1609.02907 (2016) Chen et al. [2021] Chen, J., Hachem, E., Viquerat, J.: Graph neural networks for laminar flow prediction around random two-dimensional shapes. Physics of Fluids 33(12) (2021) Scheinker [2021] Scheinker, A.: Adaptive machine learning for time-varying systems: low dimensional latent space tuning. Journal of Instrumentation 16(10), 10008 (2021) Montes de Oca Zapiain et al. [2021] Oca Zapiain, D., Stewart, J.A., Dingreville, R.: Accelerating phase-field-based microstructure evolution predictions via surrogate models trained by machine learning methods. npj Computational Materials 7(1), 3 (2021) Wiewel et al. [2019] Wiewel, S., Becher, M., Thuerey, N.: Latent space physics: Towards learning the temporal evolution of fluid flow. In: Computer Graphics Forum, vol. 38, pp. 71–82 (2019). Wiley Online Library Nakamura et al. [2021] Nakamura, T., Fukami, K., Hasegawa, K., Nabae, Y., Fukagata, K.: Convolutional neural network and long short-term memory based reduced order surrogate for minimal turbulent channel flow. Physics of Fluids 33(2) (2021) Maulik et al. [2021] Maulik, R., Lusch, B., Balaprakash, P.: Reduced-order modeling of advection-dominated systems with recurrent neural networks and convolutional autoencoders. Physics of Fluids 33(3) (2021) Vlachas et al. [2022] Vlachas, P.R., Arampatzis, G., Uhler, C., Koumoutsakos, P.: Multiscale simulations of complex systems by learning their effective dynamics. Nature Machine Intelligence 4(4), 359–366 (2022) Rautela et al. [2022] Rautela, M., Senthilnath, J., Monaco, E., Gopalakrishnan, S.: Delamination prediction in composite panels using unsupervised-feature learning methods with wavelet-enhanced guided wave representations. Composite Structures 291, 115579 (2022) Scheinker et al. [2023] Scheinker, A., Cropp, F., Filippetto, D.: Adaptive autoencoder latent space tuning for more robust machine learning beyond the training set for six-dimensional phase space diagnostics of a time-varying ultrafast electron-diffraction compact accelerator. Physical Review E 107(4), 045302 (2023) Kingma and Welling [2013] Kingma, D.P., Welling, M.: Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114 (2013) Hartmann et al. [2022] Hartmann, G., Goetzke, G., Düsterer, S., Feuer-Forson, P., Lever, F., Meier, D., Möller, F., Vera Ramirez, L., Guehr, M., Tiedtke, K., et al.: Unsupervised real-world knowledge extraction via disentangled variational autoencoders for photon diagnostics. Scientific Reports 12(1), 20783 (2022) Scheinker et al. [2023] Scheinker, A., Filippetto, D., Cropp, F.: 6d phase space diagnostics based on adaptively tuned physics-informed generative convolutional neural networks. In: Journal of Physics: Conference Series, vol. 2420, p. 
012068 (2023). IOP Publishing Cathey et al. [2018] Cathey, B., Cousineau, S., Aleksandrov, A., Zhukov, A.: First six dimensional phase space measurement of an accelerator beam. Physical Review Letters 121(6), 064804 (2018) Tenenbaum [2005] Tenenbaum, P.: Lucretia: A matlab-based toolbox for the modelling and simulation of single-pass electron beam transport systems. In: Proceedings of the 2005 Particle Accelerator Conference, pp. 4197–4199 (2005). IEEE Young and Billen [2003] Young, L., Billen, J.: The particle tracking code parmela. In: Proceedings of the Particle Accelerator Conference, vol. 5, pp. 3521–3523 (2003) Pang and Rybarcyk [2014] Pang, X., Rybarcyk, L.: Gpu accelerated online multi-particle beam dynamics simulator for ion linear particle accelerators. Computer Physics Communications 185(3), 744–753 (2014) Adelmann [2019] Adelmann, A.: On nonintrusive uncertainty quantification and surrogate model construction in particle accelerator modeling. SIAM/ASA Journal on Uncertainty Quantification 7(2), 383–416 (2019) Newton [1970] Newton, R.: Inverse problems in physics. SIAM Review 12(3), 346–356 (1970) Wolski et al. [2022] Wolski, A., Johnson, M.A., King, M., Militsyn, B.L., Williams, P.H.: Transverse phase space tomography in an accelerator test facility using image compression and machine learning. Physical Review Accelerators and Beams 25(12), 122803 (2022) Mayet et al. [2022] Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022) Zhu et al. [2] Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. [2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. 
[2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. 
[2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Cheng, M., Fang, F., Pain, C.C., Navon, I.: Data-driven modelling of nonlinear spatio-temporal fluid flows using a deep convolutional generative adversarial network. Computer Methods in Applied Mechanics and Engineering 365, 113000 (2020) Kipf and Welling [2016] Kipf, T.N., Welling, M.: Semi-supervised classification with graph convolutional networks. arXiv preprint arXiv:1609.02907 (2016) Chen et al. [2021] Chen, J., Hachem, E., Viquerat, J.: Graph neural networks for laminar flow prediction around random two-dimensional shapes. Physics of Fluids 33(12) (2021) Scheinker [2021] Scheinker, A.: Adaptive machine learning for time-varying systems: low dimensional latent space tuning. Journal of Instrumentation 16(10), 10008 (2021) Montes de Oca Zapiain et al. [2021] Oca Zapiain, D., Stewart, J.A., Dingreville, R.: Accelerating phase-field-based microstructure evolution predictions via surrogate models trained by machine learning methods. npj Computational Materials 7(1), 3 (2021) Wiewel et al. [2019] Wiewel, S., Becher, M., Thuerey, N.: Latent space physics: Towards learning the temporal evolution of fluid flow. In: Computer Graphics Forum, vol. 38, pp. 71–82 (2019). Wiley Online Library Nakamura et al. 
[2021] Nakamura, T., Fukami, K., Hasegawa, K., Nabae, Y., Fukagata, K.: Convolutional neural network and long short-term memory based reduced order surrogate for minimal turbulent channel flow. Physics of Fluids 33(2) (2021) Maulik et al. [2021] Maulik, R., Lusch, B., Balaprakash, P.: Reduced-order modeling of advection-dominated systems with recurrent neural networks and convolutional autoencoders. Physics of Fluids 33(3) (2021) Vlachas et al. [2022] Vlachas, P.R., Arampatzis, G., Uhler, C., Koumoutsakos, P.: Multiscale simulations of complex systems by learning their effective dynamics. Nature Machine Intelligence 4(4), 359–366 (2022) Rautela et al. [2022] Rautela, M., Senthilnath, J., Monaco, E., Gopalakrishnan, S.: Delamination prediction in composite panels using unsupervised-feature learning methods with wavelet-enhanced guided wave representations. Composite Structures 291, 115579 (2022) Scheinker et al. [2023] Scheinker, A., Cropp, F., Filippetto, D.: Adaptive autoencoder latent space tuning for more robust machine learning beyond the training set for six-dimensional phase space diagnostics of a time-varying ultrafast electron-diffraction compact accelerator. Physical Review E 107(4), 045302 (2023) Kingma and Welling [2013] Kingma, D.P., Welling, M.: Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114 (2013) Hartmann et al. [2022] Hartmann, G., Goetzke, G., Düsterer, S., Feuer-Forson, P., Lever, F., Meier, D., Möller, F., Vera Ramirez, L., Guehr, M., Tiedtke, K., et al.: Unsupervised real-world knowledge extraction via disentangled variational autoencoders for photon diagnostics. Scientific Reports 12(1), 20783 (2022) Scheinker et al. [2023] Scheinker, A., Filippetto, D., Cropp, F.: 6d phase space diagnostics based on adaptively tuned physics-informed generative convolutional neural networks. In: Journal of Physics: Conference Series, vol. 2420, p. 012068 (2023). IOP Publishing Cathey et al. [2018] Cathey, B., Cousineau, S., Aleksandrov, A., Zhukov, A.: First six dimensional phase space measurement of an accelerator beam. Physical Review Letters 121(6), 064804 (2018) Tenenbaum [2005] Tenenbaum, P.: Lucretia: A matlab-based toolbox for the modelling and simulation of single-pass electron beam transport systems. In: Proceedings of the 2005 Particle Accelerator Conference, pp. 4197–4199 (2005). IEEE Young and Billen [2003] Young, L., Billen, J.: The particle tracking code parmela. In: Proceedings of the Particle Accelerator Conference, vol. 5, pp. 3521–3523 (2003) Pang and Rybarcyk [2014] Pang, X., Rybarcyk, L.: Gpu accelerated online multi-particle beam dynamics simulator for ion linear particle accelerators. Computer Physics Communications 185(3), 744–753 (2014) Adelmann [2019] Adelmann, A.: On nonintrusive uncertainty quantification and surrogate model construction in particle accelerator modeling. SIAM/ASA Journal on Uncertainty Quantification 7(2), 383–416 (2019) Newton [1970] Newton, R.: Inverse problems in physics. SIAM Review 12(3), 346–356 (1970) Wolski et al. [2022] Wolski, A., Johnson, M.A., King, M., Militsyn, B.L., Williams, P.H.: Transverse phase space tomography in an accelerator test facility using image compression and machine learning. Physical Review Accelerators and Beams 25(12), 122803 (2022) Mayet et al. 
[2022] Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022) Zhu et al. [2] Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. [2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. 
The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. 
[2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Kipf, T.N., Welling, M.: Semi-supervised classification with graph convolutional networks. arXiv preprint arXiv:1609.02907 (2016) Chen et al. [2021] Chen, J., Hachem, E., Viquerat, J.: Graph neural networks for laminar flow prediction around random two-dimensional shapes. Physics of Fluids 33(12) (2021) Scheinker [2021] Scheinker, A.: Adaptive machine learning for time-varying systems: low dimensional latent space tuning. Journal of Instrumentation 16(10), 10008 (2021) Montes de Oca Zapiain et al. [2021] Oca Zapiain, D., Stewart, J.A., Dingreville, R.: Accelerating phase-field-based microstructure evolution predictions via surrogate models trained by machine learning methods. npj Computational Materials 7(1), 3 (2021) Wiewel et al. [2019] Wiewel, S., Becher, M., Thuerey, N.: Latent space physics: Towards learning the temporal evolution of fluid flow. In: Computer Graphics Forum, vol. 38, pp. 71–82 (2019). Wiley Online Library Nakamura et al. [2021] Nakamura, T., Fukami, K., Hasegawa, K., Nabae, Y., Fukagata, K.: Convolutional neural network and long short-term memory based reduced order surrogate for minimal turbulent channel flow. Physics of Fluids 33(2) (2021) Maulik et al. [2021] Maulik, R., Lusch, B., Balaprakash, P.: Reduced-order modeling of advection-dominated systems with recurrent neural networks and convolutional autoencoders. Physics of Fluids 33(3) (2021) Vlachas et al. [2022] Vlachas, P.R., Arampatzis, G., Uhler, C., Koumoutsakos, P.: Multiscale simulations of complex systems by learning their effective dynamics. Nature Machine Intelligence 4(4), 359–366 (2022) Rautela et al. [2022] Rautela, M., Senthilnath, J., Monaco, E., Gopalakrishnan, S.: Delamination prediction in composite panels using unsupervised-feature learning methods with wavelet-enhanced guided wave representations. Composite Structures 291, 115579 (2022) Scheinker et al. [2023] Scheinker, A., Cropp, F., Filippetto, D.: Adaptive autoencoder latent space tuning for more robust machine learning beyond the training set for six-dimensional phase space diagnostics of a time-varying ultrafast electron-diffraction compact accelerator. Physical Review E 107(4), 045302 (2023) Kingma and Welling [2013] Kingma, D.P., Welling, M.: Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114 (2013) Hartmann et al. [2022] Hartmann, G., Goetzke, G., Düsterer, S., Feuer-Forson, P., Lever, F., Meier, D., Möller, F., Vera Ramirez, L., Guehr, M., Tiedtke, K., et al.: Unsupervised real-world knowledge extraction via disentangled variational autoencoders for photon diagnostics. Scientific Reports 12(1), 20783 (2022) Scheinker et al. 
[2023] Scheinker, A., Filippetto, D., Cropp, F.: 6d phase space diagnostics based on adaptively tuned physics-informed generative convolutional neural networks. In: Journal of Physics: Conference Series, vol. 2420, p. 012068 (2023). IOP Publishing Cathey et al. [2018] Cathey, B., Cousineau, S., Aleksandrov, A., Zhukov, A.: First six dimensional phase space measurement of an accelerator beam. Physical Review Letters 121(6), 064804 (2018) Tenenbaum [2005] Tenenbaum, P.: Lucretia: A matlab-based toolbox for the modelling and simulation of single-pass electron beam transport systems. In: Proceedings of the 2005 Particle Accelerator Conference, pp. 4197–4199 (2005). IEEE Young and Billen [2003] Young, L., Billen, J.: The particle tracking code parmela. In: Proceedings of the Particle Accelerator Conference, vol. 5, pp. 3521–3523 (2003) Pang and Rybarcyk [2014] Pang, X., Rybarcyk, L.: Gpu accelerated online multi-particle beam dynamics simulator for ion linear particle accelerators. Computer Physics Communications 185(3), 744–753 (2014) Adelmann [2019] Adelmann, A.: On nonintrusive uncertainty quantification and surrogate model construction in particle accelerator modeling. SIAM/ASA Journal on Uncertainty Quantification 7(2), 383–416 (2019) Newton [1970] Newton, R.: Inverse problems in physics. SIAM Review 12(3), 346–356 (1970) Wolski et al. [2022] Wolski, A., Johnson, M.A., King, M., Militsyn, B.L., Williams, P.H.: Transverse phase space tomography in an accelerator test facility using image compression and machine learning. Physical Review Accelerators and Beams 25(12), 122803 (2022) Mayet et al. [2022] Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022) Zhu et al. [2] Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. [2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. 
[2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. 
Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Chen, J., Hachem, E., Viquerat, J.: Graph neural networks for laminar flow prediction around random two-dimensional shapes. Physics of Fluids 33(12) (2021) Scheinker [2021] Scheinker, A.: Adaptive machine learning for time-varying systems: low dimensional latent space tuning. Journal of Instrumentation 16(10), 10008 (2021) Montes de Oca Zapiain et al. [2021] Oca Zapiain, D., Stewart, J.A., Dingreville, R.: Accelerating phase-field-based microstructure evolution predictions via surrogate models trained by machine learning methods. npj Computational Materials 7(1), 3 (2021) Wiewel et al. [2019] Wiewel, S., Becher, M., Thuerey, N.: Latent space physics: Towards learning the temporal evolution of fluid flow. In: Computer Graphics Forum, vol. 38, pp. 71–82 (2019). Wiley Online Library Nakamura et al. 
[2021] Nakamura, T., Fukami, K., Hasegawa, K., Nabae, Y., Fukagata, K.: Convolutional neural network and long short-term memory based reduced order surrogate for minimal turbulent channel flow. Physics of Fluids 33(2) (2021) Maulik et al. [2021] Maulik, R., Lusch, B., Balaprakash, P.: Reduced-order modeling of advection-dominated systems with recurrent neural networks and convolutional autoencoders. Physics of Fluids 33(3) (2021) Vlachas et al. [2022] Vlachas, P.R., Arampatzis, G., Uhler, C., Koumoutsakos, P.: Multiscale simulations of complex systems by learning their effective dynamics. Nature Machine Intelligence 4(4), 359–366 (2022) Rautela et al. [2022] Rautela, M., Senthilnath, J., Monaco, E., Gopalakrishnan, S.: Delamination prediction in composite panels using unsupervised-feature learning methods with wavelet-enhanced guided wave representations. Composite Structures 291, 115579 (2022) Scheinker et al. [2023] Scheinker, A., Cropp, F., Filippetto, D.: Adaptive autoencoder latent space tuning for more robust machine learning beyond the training set for six-dimensional phase space diagnostics of a time-varying ultrafast electron-diffraction compact accelerator. Physical Review E 107(4), 045302 (2023) Kingma and Welling [2013] Kingma, D.P., Welling, M.: Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114 (2013) Hartmann et al. [2022] Hartmann, G., Goetzke, G., Düsterer, S., Feuer-Forson, P., Lever, F., Meier, D., Möller, F., Vera Ramirez, L., Guehr, M., Tiedtke, K., et al.: Unsupervised real-world knowledge extraction via disentangled variational autoencoders for photon diagnostics. Scientific Reports 12(1), 20783 (2022) Scheinker et al. [2023] Scheinker, A., Filippetto, D., Cropp, F.: 6d phase space diagnostics based on adaptively tuned physics-informed generative convolutional neural networks. In: Journal of Physics: Conference Series, vol. 2420, p. 012068 (2023). IOP Publishing Cathey et al. [2018] Cathey, B., Cousineau, S., Aleksandrov, A., Zhukov, A.: First six dimensional phase space measurement of an accelerator beam. Physical Review Letters 121(6), 064804 (2018) Tenenbaum [2005] Tenenbaum, P.: Lucretia: A matlab-based toolbox for the modelling and simulation of single-pass electron beam transport systems. In: Proceedings of the 2005 Particle Accelerator Conference, pp. 4197–4199 (2005). IEEE Young and Billen [2003] Young, L., Billen, J.: The particle tracking code parmela. In: Proceedings of the Particle Accelerator Conference, vol. 5, pp. 3521–3523 (2003) Pang and Rybarcyk [2014] Pang, X., Rybarcyk, L.: Gpu accelerated online multi-particle beam dynamics simulator for ion linear particle accelerators. Computer Physics Communications 185(3), 744–753 (2014) Adelmann [2019] Adelmann, A.: On nonintrusive uncertainty quantification and surrogate model construction in particle accelerator modeling. SIAM/ASA Journal on Uncertainty Quantification 7(2), 383–416 (2019) Newton [1970] Newton, R.: Inverse problems in physics. SIAM Review 12(3), 346–356 (1970) Wolski et al. [2022] Wolski, A., Johnson, M.A., King, M., Militsyn, B.L., Williams, P.H.: Transverse phase space tomography in an accelerator test facility using image compression and machine learning. Physical Review Accelerators and Beams 25(12), 122803 (2022) Mayet et al. 
[2022] Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022) Zhu et al. [2] Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. [2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. 
The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. 
[2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Scheinker, A.: Adaptive machine learning for time-varying systems: low dimensional latent space tuning. Journal of Instrumentation 16(10), 10008 (2021) Montes de Oca Zapiain et al. [2021] Oca Zapiain, D., Stewart, J.A., Dingreville, R.: Accelerating phase-field-based microstructure evolution predictions via surrogate models trained by machine learning methods. npj Computational Materials 7(1), 3 (2021) Wiewel et al. [2019] Wiewel, S., Becher, M., Thuerey, N.: Latent space physics: Towards learning the temporal evolution of fluid flow. In: Computer Graphics Forum, vol. 38, pp. 71–82 (2019). Wiley Online Library Nakamura et al. [2021] Nakamura, T., Fukami, K., Hasegawa, K., Nabae, Y., Fukagata, K.: Convolutional neural network and long short-term memory based reduced order surrogate for minimal turbulent channel flow. Physics of Fluids 33(2) (2021) Maulik et al. [2021] Maulik, R., Lusch, B., Balaprakash, P.: Reduced-order modeling of advection-dominated systems with recurrent neural networks and convolutional autoencoders. Physics of Fluids 33(3) (2021) Vlachas et al. [2022] Vlachas, P.R., Arampatzis, G., Uhler, C., Koumoutsakos, P.: Multiscale simulations of complex systems by learning their effective dynamics. Nature Machine Intelligence 4(4), 359–366 (2022) Rautela et al. [2022] Rautela, M., Senthilnath, J., Monaco, E., Gopalakrishnan, S.: Delamination prediction in composite panels using unsupervised-feature learning methods with wavelet-enhanced guided wave representations. Composite Structures 291, 115579 (2022) Scheinker et al. [2023] Scheinker, A., Cropp, F., Filippetto, D.: Adaptive autoencoder latent space tuning for more robust machine learning beyond the training set for six-dimensional phase space diagnostics of a time-varying ultrafast electron-diffraction compact accelerator. Physical Review E 107(4), 045302 (2023) Kingma and Welling [2013] Kingma, D.P., Welling, M.: Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114 (2013) Hartmann et al. [2022] Hartmann, G., Goetzke, G., Düsterer, S., Feuer-Forson, P., Lever, F., Meier, D., Möller, F., Vera Ramirez, L., Guehr, M., Tiedtke, K., et al.: Unsupervised real-world knowledge extraction via disentangled variational autoencoders for photon diagnostics. Scientific Reports 12(1), 20783 (2022) Scheinker et al. [2023] Scheinker, A., Filippetto, D., Cropp, F.: 6d phase space diagnostics based on adaptively tuned physics-informed generative convolutional neural networks. In: Journal of Physics: Conference Series, vol. 2420, p. 012068 (2023). IOP Publishing Cathey et al. 
[2018] Cathey, B., Cousineau, S., Aleksandrov, A., Zhukov, A.: First six dimensional phase space measurement of an accelerator beam. Physical Review Letters 121(6), 064804 (2018) Tenenbaum [2005] Tenenbaum, P.: Lucretia: A matlab-based toolbox for the modelling and simulation of single-pass electron beam transport systems. In: Proceedings of the 2005 Particle Accelerator Conference, pp. 4197–4199 (2005). IEEE Young and Billen [2003] Young, L., Billen, J.: The particle tracking code parmela. In: Proceedings of the Particle Accelerator Conference, vol. 5, pp. 3521–3523 (2003) Pang and Rybarcyk [2014] Pang, X., Rybarcyk, L.: Gpu accelerated online multi-particle beam dynamics simulator for ion linear particle accelerators. Computer Physics Communications 185(3), 744–753 (2014) Adelmann [2019] Adelmann, A.: On nonintrusive uncertainty quantification and surrogate model construction in particle accelerator modeling. SIAM/ASA Journal on Uncertainty Quantification 7(2), 383–416 (2019) Newton [1970] Newton, R.: Inverse problems in physics. SIAM Review 12(3), 346–356 (1970) Wolski et al. [2022] Wolski, A., Johnson, M.A., King, M., Militsyn, B.L., Williams, P.H.: Transverse phase space tomography in an accelerator test facility using image compression and machine learning. Physical Review Accelerators and Beams 25(12), 122803 (2022) Mayet et al. [2022] Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022) Zhu et al. [2] Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. [2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. 
[2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. 
[2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Oca Zapiain, D., Stewart, J.A., Dingreville, R.: Accelerating phase-field-based microstructure evolution predictions via surrogate models trained by machine learning methods. npj Computational Materials 7(1), 3 (2021) Wiewel et al. [2019] Wiewel, S., Becher, M., Thuerey, N.: Latent space physics: Towards learning the temporal evolution of fluid flow. In: Computer Graphics Forum, vol. 38, pp. 71–82 (2019). Wiley Online Library Nakamura et al. [2021] Nakamura, T., Fukami, K., Hasegawa, K., Nabae, Y., Fukagata, K.: Convolutional neural network and long short-term memory based reduced order surrogate for minimal turbulent channel flow. Physics of Fluids 33(2) (2021) Maulik et al. [2021] Maulik, R., Lusch, B., Balaprakash, P.: Reduced-order modeling of advection-dominated systems with recurrent neural networks and convolutional autoencoders. Physics of Fluids 33(3) (2021) Vlachas et al. [2022] Vlachas, P.R., Arampatzis, G., Uhler, C., Koumoutsakos, P.: Multiscale simulations of complex systems by learning their effective dynamics. Nature Machine Intelligence 4(4), 359–366 (2022) Rautela et al. 
[2022] Rautela, M., Senthilnath, J., Monaco, E., Gopalakrishnan, S.: Delamination prediction in composite panels using unsupervised-feature learning methods with wavelet-enhanced guided wave representations. Composite Structures 291, 115579 (2022) Scheinker et al. [2023] Scheinker, A., Cropp, F., Filippetto, D.: Adaptive autoencoder latent space tuning for more robust machine learning beyond the training set for six-dimensional phase space diagnostics of a time-varying ultrafast electron-diffraction compact accelerator. Physical Review E 107(4), 045302 (2023) Kingma and Welling [2013] Kingma, D.P., Welling, M.: Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114 (2013) Hartmann et al. [2022] Hartmann, G., Goetzke, G., Düsterer, S., Feuer-Forson, P., Lever, F., Meier, D., Möller, F., Vera Ramirez, L., Guehr, M., Tiedtke, K., et al.: Unsupervised real-world knowledge extraction via disentangled variational autoencoders for photon diagnostics. Scientific Reports 12(1), 20783 (2022) Scheinker et al. [2023] Scheinker, A., Filippetto, D., Cropp, F.: 6d phase space diagnostics based on adaptively tuned physics-informed generative convolutional neural networks. In: Journal of Physics: Conference Series, vol. 2420, p. 012068 (2023). IOP Publishing Cathey et al. [2018] Cathey, B., Cousineau, S., Aleksandrov, A., Zhukov, A.: First six dimensional phase space measurement of an accelerator beam. Physical Review Letters 121(6), 064804 (2018) Tenenbaum [2005] Tenenbaum, P.: Lucretia: A matlab-based toolbox for the modelling and simulation of single-pass electron beam transport systems. In: Proceedings of the 2005 Particle Accelerator Conference, pp. 4197–4199 (2005). IEEE Young and Billen [2003] Young, L., Billen, J.: The particle tracking code parmela. In: Proceedings of the Particle Accelerator Conference, vol. 5, pp. 3521–3523 (2003) Pang and Rybarcyk [2014] Pang, X., Rybarcyk, L.: Gpu accelerated online multi-particle beam dynamics simulator for ion linear particle accelerators. Computer Physics Communications 185(3), 744–753 (2014) Adelmann [2019] Adelmann, A.: On nonintrusive uncertainty quantification and surrogate model construction in particle accelerator modeling. SIAM/ASA Journal on Uncertainty Quantification 7(2), 383–416 (2019) Newton [1970] Newton, R.: Inverse problems in physics. SIAM Review 12(3), 346–356 (1970) Wolski et al. [2022] Wolski, A., Johnson, M.A., King, M., Militsyn, B.L., Williams, P.H.: Transverse phase space tomography in an accelerator test facility using image compression and machine learning. Physical Review Accelerators and Beams 25(12), 122803 (2022) Mayet et al. [2022] Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022) Zhu et al. [2] Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. 
[2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. 
[2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. 
[2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Wiewel, S., Becher, M., Thuerey, N.: Latent space physics: Towards learning the temporal evolution of fluid flow. In: Computer Graphics Forum, vol. 38, pp. 71–82 (2019). Wiley Online Library Nakamura et al. [2021] Nakamura, T., Fukami, K., Hasegawa, K., Nabae, Y., Fukagata, K.: Convolutional neural network and long short-term memory based reduced order surrogate for minimal turbulent channel flow. Physics of Fluids 33(2) (2021) Maulik et al. [2021] Maulik, R., Lusch, B., Balaprakash, P.: Reduced-order modeling of advection-dominated systems with recurrent neural networks and convolutional autoencoders. Physics of Fluids 33(3) (2021) Vlachas et al. [2022] Vlachas, P.R., Arampatzis, G., Uhler, C., Koumoutsakos, P.: Multiscale simulations of complex systems by learning their effective dynamics. Nature Machine Intelligence 4(4), 359–366 (2022) Rautela et al. [2022] Rautela, M., Senthilnath, J., Monaco, E., Gopalakrishnan, S.: Delamination prediction in composite panels using unsupervised-feature learning methods with wavelet-enhanced guided wave representations. Composite Structures 291, 115579 (2022) Scheinker et al. [2023] Scheinker, A., Cropp, F., Filippetto, D.: Adaptive autoencoder latent space tuning for more robust machine learning beyond the training set for six-dimensional phase space diagnostics of a time-varying ultrafast electron-diffraction compact accelerator. Physical Review E 107(4), 045302 (2023) Kingma and Welling [2013] Kingma, D.P., Welling, M.: Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114 (2013) Hartmann et al. [2022] Hartmann, G., Goetzke, G., Düsterer, S., Feuer-Forson, P., Lever, F., Meier, D., Möller, F., Vera Ramirez, L., Guehr, M., Tiedtke, K., et al.: Unsupervised real-world knowledge extraction via disentangled variational autoencoders for photon diagnostics. Scientific Reports 12(1), 20783 (2022) Scheinker et al. [2023] Scheinker, A., Filippetto, D., Cropp, F.: 6d phase space diagnostics based on adaptively tuned physics-informed generative convolutional neural networks. In: Journal of Physics: Conference Series, vol. 2420, p. 012068 (2023). IOP Publishing Cathey et al. [2018] Cathey, B., Cousineau, S., Aleksandrov, A., Zhukov, A.: First six dimensional phase space measurement of an accelerator beam. Physical Review Letters 121(6), 064804 (2018) Tenenbaum [2005] Tenenbaum, P.: Lucretia: A matlab-based toolbox for the modelling and simulation of single-pass electron beam transport systems. In: Proceedings of the 2005 Particle Accelerator Conference, pp. 4197–4199 (2005). IEEE Young and Billen [2003] Young, L., Billen, J.: The particle tracking code parmela. In: Proceedings of the Particle Accelerator Conference, vol. 5, pp. 3521–3523 (2003) Pang and Rybarcyk [2014] Pang, X., Rybarcyk, L.: Gpu accelerated online multi-particle beam dynamics simulator for ion linear particle accelerators. Computer Physics Communications 185(3), 744–753 (2014) Adelmann [2019] Adelmann, A.: On nonintrusive uncertainty quantification and surrogate model construction in particle accelerator modeling. SIAM/ASA Journal on Uncertainty Quantification 7(2), 383–416 (2019) Newton [1970] Newton, R.: Inverse problems in physics. SIAM Review 12(3), 346–356 (1970) Wolski et al. 
[2022] Wolski, A., Johnson, M.A., King, M., Militsyn, B.L., Williams, P.H.: Transverse phase space tomography in an accelerator test facility using image compression and machine learning. Physical Review Accelerators and Beams 25(12), 122803 (2022) Mayet et al. [2022] Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022) Zhu et al. [2] Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. [2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. 
Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016)
Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature Computational Science 2(11), 745–757 (2022)
Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023)
Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE Transactions on Neural Networks and Learning Systems 32(1), 4–24 (2020)
Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons (2008)
Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021)
Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in Neural Information Processing Systems 28 (2015)
Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021)
Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in VAE latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021)
Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of Cheminformatics 10(1), 1–9 (2018)
Karevan, Z., Suykens, J.A.: Transductive LSTM for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020)
Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014)
Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE Transactions on Image Processing 13(4), 600–612 (2004)
Van der Maaten, L., Hinton, G.: Visualizing data using t-SNE. Journal of Machine Learning Research 9(11) (2008)
McInnes, L., Healy, J., Melville, J.: UMAP: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018)
Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022)
Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023)
Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: GANs trained by a two time-scale update rule converge to a local Nash equilibrium. Advances in Neural Information Processing Systems 30 (2017)
Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in Neural Information Processing Systems 31 (2018)
Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023)
Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of ResNet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE
He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016)
IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Hartmann, G., Goetzke, G., Düsterer, S., Feuer-Forson, P., Lever, F., Meier, D., Möller, F., Vera Ramirez, L., Guehr, M., Tiedtke, K., et al.: Unsupervised real-world knowledge extraction via disentangled variational autoencoders for photon diagnostics. Scientific Reports 12(1), 20783 (2022) Scheinker et al. [2023] Scheinker, A., Filippetto, D., Cropp, F.: 6d phase space diagnostics based on adaptively tuned physics-informed generative convolutional neural networks. In: Journal of Physics: Conference Series, vol. 2420, p. 012068 (2023). IOP Publishing Cathey et al. [2018] Cathey, B., Cousineau, S., Aleksandrov, A., Zhukov, A.: First six dimensional phase space measurement of an accelerator beam. Physical Review Letters 121(6), 064804 (2018) Tenenbaum [2005] Tenenbaum, P.: Lucretia: A matlab-based toolbox for the modelling and simulation of single-pass electron beam transport systems. In: Proceedings of the 2005 Particle Accelerator Conference, pp. 4197–4199 (2005). IEEE Young and Billen [2003] Young, L., Billen, J.: The particle tracking code parmela. In: Proceedings of the Particle Accelerator Conference, vol. 5, pp. 3521–3523 (2003) Pang and Rybarcyk [2014] Pang, X., Rybarcyk, L.: Gpu accelerated online multi-particle beam dynamics simulator for ion linear particle accelerators. 
Computer Physics Communications 185(3), 744–753 (2014) Adelmann [2019] Adelmann, A.: On nonintrusive uncertainty quantification and surrogate model construction in particle accelerator modeling. SIAM/ASA Journal on Uncertainty Quantification 7(2), 383–416 (2019) Newton [1970] Newton, R.: Inverse problems in physics. SIAM Review 12(3), 346–356 (1970) Wolski et al. [2022] Wolski, A., Johnson, M.A., King, M., Militsyn, B.L., Williams, P.H.: Transverse phase space tomography in an accelerator test facility using image compression and machine learning. Physical Review Accelerators and Beams 25(12), 122803 (2022) Mayet et al. [2022] Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022) Zhu et al. [2] Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. [2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. 
Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. 
IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Scheinker, A., Filippetto, D., Cropp, F.: 6d phase space diagnostics based on adaptively tuned physics-informed generative convolutional neural networks. In: Journal of Physics: Conference Series, vol. 2420, p. 012068 (2023). IOP Publishing Cathey et al. [2018] Cathey, B., Cousineau, S., Aleksandrov, A., Zhukov, A.: First six dimensional phase space measurement of an accelerator beam. Physical Review Letters 121(6), 064804 (2018) Tenenbaum [2005] Tenenbaum, P.: Lucretia: A matlab-based toolbox for the modelling and simulation of single-pass electron beam transport systems. In: Proceedings of the 2005 Particle Accelerator Conference, pp. 4197–4199 (2005). IEEE Young and Billen [2003] Young, L., Billen, J.: The particle tracking code parmela. In: Proceedings of the Particle Accelerator Conference, vol. 5, pp. 3521–3523 (2003) Pang and Rybarcyk [2014] Pang, X., Rybarcyk, L.: Gpu accelerated online multi-particle beam dynamics simulator for ion linear particle accelerators. Computer Physics Communications 185(3), 744–753 (2014) Adelmann [2019] Adelmann, A.: On nonintrusive uncertainty quantification and surrogate model construction in particle accelerator modeling. SIAM/ASA Journal on Uncertainty Quantification 7(2), 383–416 (2019) Newton [1970] Newton, R.: Inverse problems in physics. SIAM Review 12(3), 346–356 (1970) Wolski et al. [2022] Wolski, A., Johnson, M.A., King, M., Militsyn, B.L., Williams, P.H.: Transverse phase space tomography in an accelerator test facility using image compression and machine learning. Physical Review Accelerators and Beams 25(12), 122803 (2022) Mayet et al. [2022] Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022) Zhu et al. 
[2] Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. [2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. 
[2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. 
[2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Cathey, B., Cousineau, S., Aleksandrov, A., Zhukov, A.: First six dimensional phase space measurement of an accelerator beam. Physical Review Letters 121(6), 064804 (2018) Tenenbaum [2005] Tenenbaum, P.: Lucretia: A matlab-based toolbox for the modelling and simulation of single-pass electron beam transport systems. In: Proceedings of the 2005 Particle Accelerator Conference, pp. 4197–4199 (2005). IEEE Young and Billen [2003] Young, L., Billen, J.: The particle tracking code parmela. In: Proceedings of the Particle Accelerator Conference, vol. 5, pp. 3521–3523 (2003) Pang and Rybarcyk [2014] Pang, X., Rybarcyk, L.: Gpu accelerated online multi-particle beam dynamics simulator for ion linear particle accelerators. Computer Physics Communications 185(3), 744–753 (2014) Adelmann [2019] Adelmann, A.: On nonintrusive uncertainty quantification and surrogate model construction in particle accelerator modeling. SIAM/ASA Journal on Uncertainty Quantification 7(2), 383–416 (2019) Newton [1970] Newton, R.: Inverse problems in physics. SIAM Review 12(3), 346–356 (1970) Wolski et al. [2022] Wolski, A., Johnson, M.A., King, M., Militsyn, B.L., Williams, P.H.: Transverse phase space tomography in an accelerator test facility using image compression and machine learning. Physical Review Accelerators and Beams 25(12), 122803 (2022) Mayet et al. [2022] Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022) Zhu et al. [2] Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. [2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. 
Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. 
Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Tenenbaum, P.: Lucretia: A matlab-based toolbox for the modelling and simulation of single-pass electron beam transport systems. In: Proceedings of the 2005 Particle Accelerator Conference, pp. 4197–4199 (2005). IEEE Young and Billen [2003] Young, L., Billen, J.: The particle tracking code parmela. In: Proceedings of the Particle Accelerator Conference, vol. 5, pp. 3521–3523 (2003) Pang and Rybarcyk [2014] Pang, X., Rybarcyk, L.: Gpu accelerated online multi-particle beam dynamics simulator for ion linear particle accelerators. 
Computer Physics Communications 185(3), 744–753 (2014) Adelmann [2019] Adelmann, A.: On nonintrusive uncertainty quantification and surrogate model construction in particle accelerator modeling. SIAM/ASA Journal on Uncertainty Quantification 7(2), 383–416 (2019) Newton [1970] Newton, R.: Inverse problems in physics. SIAM Review 12(3), 346–356 (1970) Wolski et al. [2022] Wolski, A., Johnson, M.A., King, M., Militsyn, B.L., Williams, P.H.: Transverse phase space tomography in an accelerator test facility using image compression and machine learning. Physical Review Accelerators and Beams 25(12), 122803 (2022) Mayet et al. [2022] Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022) Zhu et al. [2] Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. [2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. 
Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. 
IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Young, L., Billen, J.: The particle tracking code parmela. In: Proceedings of the Particle Accelerator Conference, vol. 5, pp. 3521–3523 (2003) Pang and Rybarcyk [2014] Pang, X., Rybarcyk, L.: Gpu accelerated online multi-particle beam dynamics simulator for ion linear particle accelerators. Computer Physics Communications 185(3), 744–753 (2014) Adelmann [2019] Adelmann, A.: On nonintrusive uncertainty quantification and surrogate model construction in particle accelerator modeling. SIAM/ASA Journal on Uncertainty Quantification 7(2), 383–416 (2019) Newton [1970] Newton, R.: Inverse problems in physics. SIAM Review 12(3), 346–356 (1970) Wolski et al. [2022] Wolski, A., Johnson, M.A., King, M., Militsyn, B.L., Williams, P.H.: Transverse phase space tomography in an accelerator test facility using image compression and machine learning. Physical Review Accelerators and Beams 25(12), 122803 (2022) Mayet et al. [2022] Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022) Zhu et al. [2] Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. [2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. 
[2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. 
Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Pang, X., Rybarcyk, L.: Gpu accelerated online multi-particle beam dynamics simulator for ion linear particle accelerators. 
[2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. 
[2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. 
[2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. 
Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. 
[2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. 
arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. 
[2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. 
[2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. 
Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. 
[2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. 
[2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. 
Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. 
[2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. 
[2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. 
[2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. 
Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. 
John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. 
[2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. 
[2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. 
[2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. 
IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. 
[2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. 
[2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. 
[2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. 
[2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. 
[2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. 
[2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. 
[2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. 
[2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. 
[2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. 
[2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018)
Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Reichstein, M., Camps-Valls, G., Stevens, B., Jung, M., Denzler, J., Carvalhais, N., Prabhat, f.: Deep learning and process understanding for data-driven earth system science. Nature 566(7743), 195–204 (2019) Scheinker and Pokharel [2023] Scheinker, A., Pokharel, R.: Physics-constrained 3D convolutional neural networks for electrodynamics. APL Machine Learning 1(2) (2023) Wandel et al. 
[2021] Wandel, N., Weinmann, M., Klein, R.: Teaching the incompressible navier–stokes equations to fast neural surrogate models in three dimensions. Physics of Fluids 33(4) (2021) Shi et al. [2015] Shi, X., Chen, Z., Wang, H., Yeung, D.-Y., Wong, W.-K., Woo, W.-c.: Convolutional lstm network: A machine learning approach for precipitation nowcasting. Advances in neural information processing systems 28 (2015) Cheng et al. [2020] Cheng, M., Fang, F., Pain, C.C., Navon, I.: Data-driven modelling of nonlinear spatio-temporal fluid flows using a deep convolutional generative adversarial network. Computer Methods in Applied Mechanics and Engineering 365, 113000 (2020) Kipf and Welling [2016] Kipf, T.N., Welling, M.: Semi-supervised classification with graph convolutional networks. arXiv preprint arXiv:1609.02907 (2016) Chen et al. [2021] Chen, J., Hachem, E., Viquerat, J.: Graph neural networks for laminar flow prediction around random two-dimensional shapes. Physics of Fluids 33(12) (2021) Scheinker [2021] Scheinker, A.: Adaptive machine learning for time-varying systems: low dimensional latent space tuning. Journal of Instrumentation 16(10), 10008 (2021) Montes de Oca Zapiain et al. [2021] Oca Zapiain, D., Stewart, J.A., Dingreville, R.: Accelerating phase-field-based microstructure evolution predictions via surrogate models trained by machine learning methods. npj Computational Materials 7(1), 3 (2021) Wiewel et al. [2019] Wiewel, S., Becher, M., Thuerey, N.: Latent space physics: Towards learning the temporal evolution of fluid flow. In: Computer Graphics Forum, vol. 38, pp. 71–82 (2019). Wiley Online Library Nakamura et al. [2021] Nakamura, T., Fukami, K., Hasegawa, K., Nabae, Y., Fukagata, K.: Convolutional neural network and long short-term memory based reduced order surrogate for minimal turbulent channel flow. Physics of Fluids 33(2) (2021) Maulik et al. [2021] Maulik, R., Lusch, B., Balaprakash, P.: Reduced-order modeling of advection-dominated systems with recurrent neural networks and convolutional autoencoders. Physics of Fluids 33(3) (2021) Vlachas et al. [2022] Vlachas, P.R., Arampatzis, G., Uhler, C., Koumoutsakos, P.: Multiscale simulations of complex systems by learning their effective dynamics. Nature Machine Intelligence 4(4), 359–366 (2022) Rautela et al. [2022] Rautela, M., Senthilnath, J., Monaco, E., Gopalakrishnan, S.: Delamination prediction in composite panels using unsupervised-feature learning methods with wavelet-enhanced guided wave representations. Composite Structures 291, 115579 (2022) Scheinker et al. [2023] Scheinker, A., Cropp, F., Filippetto, D.: Adaptive autoencoder latent space tuning for more robust machine learning beyond the training set for six-dimensional phase space diagnostics of a time-varying ultrafast electron-diffraction compact accelerator. Physical Review E 107(4), 045302 (2023) Kingma and Welling [2013] Kingma, D.P., Welling, M.: Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114 (2013) Hartmann et al. [2022] Hartmann, G., Goetzke, G., Düsterer, S., Feuer-Forson, P., Lever, F., Meier, D., Möller, F., Vera Ramirez, L., Guehr, M., Tiedtke, K., et al.: Unsupervised real-world knowledge extraction via disentangled variational autoencoders for photon diagnostics. Scientific Reports 12(1), 20783 (2022) Scheinker et al. [2023] Scheinker, A., Filippetto, D., Cropp, F.: 6d phase space diagnostics based on adaptively tuned physics-informed generative convolutional neural networks. In: Journal of Physics: Conference Series, vol. 2420, p. 
012068 (2023). IOP Publishing Cathey et al. [2018] Cathey, B., Cousineau, S., Aleksandrov, A., Zhukov, A.: First six dimensional phase space measurement of an accelerator beam. Physical Review Letters 121(6), 064804 (2018) Tenenbaum [2005] Tenenbaum, P.: Lucretia: A matlab-based toolbox for the modelling and simulation of single-pass electron beam transport systems. In: Proceedings of the 2005 Particle Accelerator Conference, pp. 4197–4199 (2005). IEEE Young and Billen [2003] Young, L., Billen, J.: The particle tracking code parmela. In: Proceedings of the Particle Accelerator Conference, vol. 5, pp. 3521–3523 (2003) Pang and Rybarcyk [2014] Pang, X., Rybarcyk, L.: Gpu accelerated online multi-particle beam dynamics simulator for ion linear particle accelerators. Computer Physics Communications 185(3), 744–753 (2014) Adelmann [2019] Adelmann, A.: On nonintrusive uncertainty quantification and surrogate model construction in particle accelerator modeling. SIAM/ASA Journal on Uncertainty Quantification 7(2), 383–416 (2019) Newton [1970] Newton, R.: Inverse problems in physics. SIAM Review 12(3), 346–356 (1970) Wolski et al. [2022] Wolski, A., Johnson, M.A., King, M., Militsyn, B.L., Williams, P.H.: Transverse phase space tomography in an accelerator test facility using image compression and machine learning. Physical Review Accelerators and Beams 25(12), 122803 (2022) Mayet et al. [2022] Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022) Zhu et al. [2] Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. [2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. 
[2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. 
[2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Scheinker, A., Pokharel, R.: Physics-constrained 3D convolutional neural networks for electrodynamics. APL Machine Learning 1(2) (2023) Wandel et al. [2021] Wandel, N., Weinmann, M., Klein, R.: Teaching the incompressible navier–stokes equations to fast neural surrogate models in three dimensions. Physics of Fluids 33(4) (2021) Shi et al. [2015] Shi, X., Chen, Z., Wang, H., Yeung, D.-Y., Wong, W.-K., Woo, W.-c.: Convolutional lstm network: A machine learning approach for precipitation nowcasting. Advances in neural information processing systems 28 (2015) Cheng et al. [2020] Cheng, M., Fang, F., Pain, C.C., Navon, I.: Data-driven modelling of nonlinear spatio-temporal fluid flows using a deep convolutional generative adversarial network. Computer Methods in Applied Mechanics and Engineering 365, 113000 (2020) Kipf and Welling [2016] Kipf, T.N., Welling, M.: Semi-supervised classification with graph convolutional networks. arXiv preprint arXiv:1609.02907 (2016) Chen et al. [2021] Chen, J., Hachem, E., Viquerat, J.: Graph neural networks for laminar flow prediction around random two-dimensional shapes. Physics of Fluids 33(12) (2021) Scheinker [2021] Scheinker, A.: Adaptive machine learning for time-varying systems: low dimensional latent space tuning. 
Journal of Instrumentation 16(10), 10008 (2021) Montes de Oca Zapiain et al. [2021] Oca Zapiain, D., Stewart, J.A., Dingreville, R.: Accelerating phase-field-based microstructure evolution predictions via surrogate models trained by machine learning methods. npj Computational Materials 7(1), 3 (2021) Wiewel et al. [2019] Wiewel, S., Becher, M., Thuerey, N.: Latent space physics: Towards learning the temporal evolution of fluid flow. In: Computer Graphics Forum, vol. 38, pp. 71–82 (2019). Wiley Online Library Nakamura et al. [2021] Nakamura, T., Fukami, K., Hasegawa, K., Nabae, Y., Fukagata, K.: Convolutional neural network and long short-term memory based reduced order surrogate for minimal turbulent channel flow. Physics of Fluids 33(2) (2021) Maulik et al. [2021] Maulik, R., Lusch, B., Balaprakash, P.: Reduced-order modeling of advection-dominated systems with recurrent neural networks and convolutional autoencoders. Physics of Fluids 33(3) (2021) Vlachas et al. [2022] Vlachas, P.R., Arampatzis, G., Uhler, C., Koumoutsakos, P.: Multiscale simulations of complex systems by learning their effective dynamics. Nature Machine Intelligence 4(4), 359–366 (2022) Rautela et al. [2022] Rautela, M., Senthilnath, J., Monaco, E., Gopalakrishnan, S.: Delamination prediction in composite panels using unsupervised-feature learning methods with wavelet-enhanced guided wave representations. Composite Structures 291, 115579 (2022) Scheinker et al. [2023] Scheinker, A., Cropp, F., Filippetto, D.: Adaptive autoencoder latent space tuning for more robust machine learning beyond the training set for six-dimensional phase space diagnostics of a time-varying ultrafast electron-diffraction compact accelerator. Physical Review E 107(4), 045302 (2023) Kingma and Welling [2013] Kingma, D.P., Welling, M.: Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114 (2013) Hartmann et al. [2022] Hartmann, G., Goetzke, G., Düsterer, S., Feuer-Forson, P., Lever, F., Meier, D., Möller, F., Vera Ramirez, L., Guehr, M., Tiedtke, K., et al.: Unsupervised real-world knowledge extraction via disentangled variational autoencoders for photon diagnostics. Scientific Reports 12(1), 20783 (2022) Scheinker et al. [2023] Scheinker, A., Filippetto, D., Cropp, F.: 6d phase space diagnostics based on adaptively tuned physics-informed generative convolutional neural networks. In: Journal of Physics: Conference Series, vol. 2420, p. 012068 (2023). IOP Publishing Cathey et al. [2018] Cathey, B., Cousineau, S., Aleksandrov, A., Zhukov, A.: First six dimensional phase space measurement of an accelerator beam. Physical Review Letters 121(6), 064804 (2018) Tenenbaum [2005] Tenenbaum, P.: Lucretia: A matlab-based toolbox for the modelling and simulation of single-pass electron beam transport systems. In: Proceedings of the 2005 Particle Accelerator Conference, pp. 4197–4199 (2005). IEEE Young and Billen [2003] Young, L., Billen, J.: The particle tracking code parmela. In: Proceedings of the Particle Accelerator Conference, vol. 5, pp. 3521–3523 (2003) Pang and Rybarcyk [2014] Pang, X., Rybarcyk, L.: Gpu accelerated online multi-particle beam dynamics simulator for ion linear particle accelerators. Computer Physics Communications 185(3), 744–753 (2014) Adelmann [2019] Adelmann, A.: On nonintrusive uncertainty quantification and surrogate model construction in particle accelerator modeling. SIAM/ASA Journal on Uncertainty Quantification 7(2), 383–416 (2019) Newton [1970] Newton, R.: Inverse problems in physics. 
SIAM Review 12(3), 346–356 (1970) Wolski et al. [2022] Wolski, A., Johnson, M.A., King, M., Militsyn, B.L., Williams, P.H.: Transverse phase space tomography in an accelerator test facility using image compression and machine learning. Physical Review Accelerators and Beams 25(12), 122803 (2022) Mayet et al. [2022] Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022) Zhu et al. [2] Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. [2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. 
[2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. 
[2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Wandel, N., Weinmann, M., Klein, R.: Teaching the incompressible navier–stokes equations to fast neural surrogate models in three dimensions. Physics of Fluids 33(4) (2021) Shi et al. [2015] Shi, X., Chen, Z., Wang, H., Yeung, D.-Y., Wong, W.-K., Woo, W.-c.: Convolutional lstm network: A machine learning approach for precipitation nowcasting. Advances in neural information processing systems 28 (2015) Cheng et al. [2020] Cheng, M., Fang, F., Pain, C.C., Navon, I.: Data-driven modelling of nonlinear spatio-temporal fluid flows using a deep convolutional generative adversarial network. Computer Methods in Applied Mechanics and Engineering 365, 113000 (2020) Kipf and Welling [2016] Kipf, T.N., Welling, M.: Semi-supervised classification with graph convolutional networks. arXiv preprint arXiv:1609.02907 (2016) Chen et al. [2021] Chen, J., Hachem, E., Viquerat, J.: Graph neural networks for laminar flow prediction around random two-dimensional shapes. Physics of Fluids 33(12) (2021) Scheinker [2021] Scheinker, A.: Adaptive machine learning for time-varying systems: low dimensional latent space tuning. Journal of Instrumentation 16(10), 10008 (2021) Montes de Oca Zapiain et al. [2021] Oca Zapiain, D., Stewart, J.A., Dingreville, R.: Accelerating phase-field-based microstructure evolution predictions via surrogate models trained by machine learning methods. npj Computational Materials 7(1), 3 (2021) Wiewel et al. [2019] Wiewel, S., Becher, M., Thuerey, N.: Latent space physics: Towards learning the temporal evolution of fluid flow. In: Computer Graphics Forum, vol. 38, pp. 71–82 (2019). Wiley Online Library Nakamura et al. [2021] Nakamura, T., Fukami, K., Hasegawa, K., Nabae, Y., Fukagata, K.: Convolutional neural network and long short-term memory based reduced order surrogate for minimal turbulent channel flow. Physics of Fluids 33(2) (2021) Maulik et al. [2021] Maulik, R., Lusch, B., Balaprakash, P.: Reduced-order modeling of advection-dominated systems with recurrent neural networks and convolutional autoencoders. Physics of Fluids 33(3) (2021) Vlachas et al. 
[2022] Vlachas, P.R., Arampatzis, G., Uhler, C., Koumoutsakos, P.: Multiscale simulations of complex systems by learning their effective dynamics. Nature Machine Intelligence 4(4), 359–366 (2022) Rautela et al. [2022] Rautela, M., Senthilnath, J., Monaco, E., Gopalakrishnan, S.: Delamination prediction in composite panels using unsupervised-feature learning methods with wavelet-enhanced guided wave representations. Composite Structures 291, 115579 (2022) Scheinker et al. [2023] Scheinker, A., Cropp, F., Filippetto, D.: Adaptive autoencoder latent space tuning for more robust machine learning beyond the training set for six-dimensional phase space diagnostics of a time-varying ultrafast electron-diffraction compact accelerator. Physical Review E 107(4), 045302 (2023) Kingma and Welling [2013] Kingma, D.P., Welling, M.: Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114 (2013) Hartmann et al. [2022] Hartmann, G., Goetzke, G., Düsterer, S., Feuer-Forson, P., Lever, F., Meier, D., Möller, F., Vera Ramirez, L., Guehr, M., Tiedtke, K., et al.: Unsupervised real-world knowledge extraction via disentangled variational autoencoders for photon diagnostics. Scientific Reports 12(1), 20783 (2022) Scheinker et al. [2023] Scheinker, A., Filippetto, D., Cropp, F.: 6d phase space diagnostics based on adaptively tuned physics-informed generative convolutional neural networks. In: Journal of Physics: Conference Series, vol. 2420, p. 012068 (2023). IOP Publishing Cathey et al. [2018] Cathey, B., Cousineau, S., Aleksandrov, A., Zhukov, A.: First six dimensional phase space measurement of an accelerator beam. Physical Review Letters 121(6), 064804 (2018) Tenenbaum [2005] Tenenbaum, P.: Lucretia: A matlab-based toolbox for the modelling and simulation of single-pass electron beam transport systems. In: Proceedings of the 2005 Particle Accelerator Conference, pp. 4197–4199 (2005). IEEE Young and Billen [2003] Young, L., Billen, J.: The particle tracking code parmela. In: Proceedings of the Particle Accelerator Conference, vol. 5, pp. 3521–3523 (2003) Pang and Rybarcyk [2014] Pang, X., Rybarcyk, L.: Gpu accelerated online multi-particle beam dynamics simulator for ion linear particle accelerators. Computer Physics Communications 185(3), 744–753 (2014) Adelmann [2019] Adelmann, A.: On nonintrusive uncertainty quantification and surrogate model construction in particle accelerator modeling. SIAM/ASA Journal on Uncertainty Quantification 7(2), 383–416 (2019) Newton [1970] Newton, R.: Inverse problems in physics. SIAM Review 12(3), 346–356 (1970) Wolski et al. [2022] Wolski, A., Johnson, M.A., King, M., Militsyn, B.L., Williams, P.H.: Transverse phase space tomography in an accelerator test facility using image compression and machine learning. Physical Review Accelerators and Beams 25(12), 122803 (2022) Mayet et al. [2022] Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022) Zhu et al. [2] Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. 
[2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. [2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. 
IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. 
In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Shi, X., Chen, Z., Wang, H., Yeung, D.-Y., Wong, W.-K., Woo, W.-c.: Convolutional lstm network: A machine learning approach for precipitation nowcasting. Advances in neural information processing systems 28 (2015) Cheng et al. [2020] Cheng, M., Fang, F., Pain, C.C., Navon, I.: Data-driven modelling of nonlinear spatio-temporal fluid flows using a deep convolutional generative adversarial network. Computer Methods in Applied Mechanics and Engineering 365, 113000 (2020) Kipf and Welling [2016] Kipf, T.N., Welling, M.: Semi-supervised classification with graph convolutional networks. arXiv preprint arXiv:1609.02907 (2016) Chen et al. [2021] Chen, J., Hachem, E., Viquerat, J.: Graph neural networks for laminar flow prediction around random two-dimensional shapes. Physics of Fluids 33(12) (2021) Scheinker [2021] Scheinker, A.: Adaptive machine learning for time-varying systems: low dimensional latent space tuning. Journal of Instrumentation 16(10), 10008 (2021) Montes de Oca Zapiain et al. [2021] Oca Zapiain, D., Stewart, J.A., Dingreville, R.: Accelerating phase-field-based microstructure evolution predictions via surrogate models trained by machine learning methods. npj Computational Materials 7(1), 3 (2021) Wiewel et al. [2019] Wiewel, S., Becher, M., Thuerey, N.: Latent space physics: Towards learning the temporal evolution of fluid flow. In: Computer Graphics Forum, vol. 38, pp. 71–82 (2019). Wiley Online Library Nakamura et al. [2021] Nakamura, T., Fukami, K., Hasegawa, K., Nabae, Y., Fukagata, K.: Convolutional neural network and long short-term memory based reduced order surrogate for minimal turbulent channel flow. Physics of Fluids 33(2) (2021) Maulik et al. [2021] Maulik, R., Lusch, B., Balaprakash, P.: Reduced-order modeling of advection-dominated systems with recurrent neural networks and convolutional autoencoders. Physics of Fluids 33(3) (2021) Vlachas et al. [2022] Vlachas, P.R., Arampatzis, G., Uhler, C., Koumoutsakos, P.: Multiscale simulations of complex systems by learning their effective dynamics. Nature Machine Intelligence 4(4), 359–366 (2022) Rautela et al. [2022] Rautela, M., Senthilnath, J., Monaco, E., Gopalakrishnan, S.: Delamination prediction in composite panels using unsupervised-feature learning methods with wavelet-enhanced guided wave representations. Composite Structures 291, 115579 (2022) Scheinker et al. [2023] Scheinker, A., Cropp, F., Filippetto, D.: Adaptive autoencoder latent space tuning for more robust machine learning beyond the training set for six-dimensional phase space diagnostics of a time-varying ultrafast electron-diffraction compact accelerator. Physical Review E 107(4), 045302 (2023) Kingma and Welling [2013] Kingma, D.P., Welling, M.: Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114 (2013) Hartmann et al. [2022] Hartmann, G., Goetzke, G., Düsterer, S., Feuer-Forson, P., Lever, F., Meier, D., Möller, F., Vera Ramirez, L., Guehr, M., Tiedtke, K., et al.: Unsupervised real-world knowledge extraction via disentangled variational autoencoders for photon diagnostics. Scientific Reports 12(1), 20783 (2022) Scheinker et al. 
[2023] Scheinker, A., Filippetto, D., Cropp, F.: 6D phase space diagnostics based on adaptively tuned physics-informed generative convolutional neural networks. In: Journal of Physics: Conference Series, vol. 2420, p. 012068 (2023). IOP Publishing
Cathey et al. [2018] Cathey, B., Cousineau, S., Aleksandrov, A., Zhukov, A.: First six dimensional phase space measurement of an accelerator beam. Physical Review Letters 121(6), 064804 (2018)
Tenenbaum [2005] Tenenbaum, P.: Lucretia: A MATLAB-based toolbox for the modelling and simulation of single-pass electron beam transport systems. In: Proceedings of the 2005 Particle Accelerator Conference, pp. 4197–4199 (2005). IEEE
Young and Billen [2003] Young, L., Billen, J.: The particle tracking code PARMELA. In: Proceedings of the Particle Accelerator Conference, vol. 5, pp. 3521–3523 (2003)
Pang and Rybarcyk [2014] Pang, X., Rybarcyk, L.: GPU accelerated online multi-particle beam dynamics simulator for ion linear particle accelerators. Computer Physics Communications 185(3), 744–753 (2014)
Adelmann [2019] Adelmann, A.: On nonintrusive uncertainty quantification and surrogate model construction in particle accelerator modeling. SIAM/ASA Journal on Uncertainty Quantification 7(2), 383–416 (2019)
Newton [1970] Newton, R.: Inverse problems in physics. SIAM Review 12(3), 346–356 (1970)
Wolski et al. [2022] Wolski, A., Johnson, M.A., King, M., Militsyn, B.L., Williams, P.H.: Transverse phase space tomography in an accelerator test facility using image compression and machine learning. Physical Review Accelerators and Beams 25(12), 122803 (2022)
Mayet et al. [2022] Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022)
Zhu et al. [2021] Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2021)
Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O'Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018)
Caliari et al. [2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep Lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023)
Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense X-ray free-electron-laser pulses using Bayesian optimization. Physical Review Research 5(2), 023114 (2023)
Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical Review Accelerators and Beams 23(7), 074601 (2020)
Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022)
Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient RF cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022)
Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at Jefferson Laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020)
Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018)
Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023)
Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016)
Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (ARIMA) models. Academic Emergency Medicine 5(7), 739–744 (1998)
Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: MADE: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR
Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016)
Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature Computational Science 2(11), 745–757 (2022)
Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023)
Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE Transactions on Neural Networks and Learning Systems 32(1), 4–24 (2020)
Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons (2008)
Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021)
Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in Neural Information Processing Systems 28 (2015)
Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021)
Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in VAE latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021)
Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of Cheminformatics 10(1), 1–9 (2018)
Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive LSTM for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020)
Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014)
Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE Transactions on Image Processing 13(4), 600–612 (2004)
Van der Maaten and Hinton [2008] Van der Maaten, L., Hinton, G.: Visualizing data using t-SNE. Journal of Machine Learning Research 9(11) (2008)
McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: UMAP: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018)
Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022)
Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023)
Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: GANs trained by a two time-scale update rule converge to a local Nash equilibrium. Advances in Neural Information Processing Systems 30 (2017)
Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in Neural Information Processing Systems 31 (2018)
Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023)
Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of ResNet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE
He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016)
Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018)
[2021] Oca Zapiain, D., Stewart, J.A., Dingreville, R.: Accelerating phase-field-based microstructure evolution predictions via surrogate models trained by machine learning methods. npj Computational Materials 7(1), 3 (2021) Wiewel et al. [2019] Wiewel, S., Becher, M., Thuerey, N.: Latent space physics: Towards learning the temporal evolution of fluid flow. In: Computer Graphics Forum, vol. 38, pp. 71–82 (2019). Wiley Online Library Nakamura et al. [2021] Nakamura, T., Fukami, K., Hasegawa, K., Nabae, Y., Fukagata, K.: Convolutional neural network and long short-term memory based reduced order surrogate for minimal turbulent channel flow. Physics of Fluids 33(2) (2021) Maulik et al. [2021] Maulik, R., Lusch, B., Balaprakash, P.: Reduced-order modeling of advection-dominated systems with recurrent neural networks and convolutional autoencoders. Physics of Fluids 33(3) (2021) Vlachas et al. [2022] Vlachas, P.R., Arampatzis, G., Uhler, C., Koumoutsakos, P.: Multiscale simulations of complex systems by learning their effective dynamics. Nature Machine Intelligence 4(4), 359–366 (2022) Rautela et al. [2022] Rautela, M., Senthilnath, J., Monaco, E., Gopalakrishnan, S.: Delamination prediction in composite panels using unsupervised-feature learning methods with wavelet-enhanced guided wave representations. Composite Structures 291, 115579 (2022) Scheinker et al. [2023] Scheinker, A., Cropp, F., Filippetto, D.: Adaptive autoencoder latent space tuning for more robust machine learning beyond the training set for six-dimensional phase space diagnostics of a time-varying ultrafast electron-diffraction compact accelerator. Physical Review E 107(4), 045302 (2023) Kingma and Welling [2013] Kingma, D.P., Welling, M.: Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114 (2013) Hartmann et al. [2022] Hartmann, G., Goetzke, G., Düsterer, S., Feuer-Forson, P., Lever, F., Meier, D., Möller, F., Vera Ramirez, L., Guehr, M., Tiedtke, K., et al.: Unsupervised real-world knowledge extraction via disentangled variational autoencoders for photon diagnostics. Scientific Reports 12(1), 20783 (2022) Scheinker et al. [2023] Scheinker, A., Filippetto, D., Cropp, F.: 6d phase space diagnostics based on adaptively tuned physics-informed generative convolutional neural networks. In: Journal of Physics: Conference Series, vol. 2420, p. 012068 (2023). IOP Publishing Cathey et al. [2018] Cathey, B., Cousineau, S., Aleksandrov, A., Zhukov, A.: First six dimensional phase space measurement of an accelerator beam. Physical Review Letters 121(6), 064804 (2018) Tenenbaum [2005] Tenenbaum, P.: Lucretia: A matlab-based toolbox for the modelling and simulation of single-pass electron beam transport systems. In: Proceedings of the 2005 Particle Accelerator Conference, pp. 4197–4199 (2005). IEEE Young and Billen [2003] Young, L., Billen, J.: The particle tracking code parmela. In: Proceedings of the Particle Accelerator Conference, vol. 5, pp. 3521–3523 (2003) Pang and Rybarcyk [2014] Pang, X., Rybarcyk, L.: Gpu accelerated online multi-particle beam dynamics simulator for ion linear particle accelerators. Computer Physics Communications 185(3), 744–753 (2014) Adelmann [2019] Adelmann, A.: On nonintrusive uncertainty quantification and surrogate model construction in particle accelerator modeling. SIAM/ASA Journal on Uncertainty Quantification 7(2), 383–416 (2019) Newton [1970] Newton, R.: Inverse problems in physics. SIAM Review 12(3), 346–356 (1970) Wolski et al. 
[2022] Wolski, A., Johnson, M.A., King, M., Militsyn, B.L., Williams, P.H.: Transverse phase space tomography in an accelerator test facility using image compression and machine learning. Physical Review Accelerators and Beams 25(12), 122803 (2022) Mayet et al. [2022] Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022) Zhu et al. [2] Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. [2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. 
In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. 
[2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Kipf, T.N., Welling, M.: Semi-supervised classification with graph convolutional networks. arXiv preprint arXiv:1609.02907 (2016) Chen et al. [2021] Chen, J., Hachem, E., Viquerat, J.: Graph neural networks for laminar flow prediction around random two-dimensional shapes. Physics of Fluids 33(12) (2021) Scheinker [2021] Scheinker, A.: Adaptive machine learning for time-varying systems: low dimensional latent space tuning. Journal of Instrumentation 16(10), 10008 (2021) Montes de Oca Zapiain et al. [2021] Oca Zapiain, D., Stewart, J.A., Dingreville, R.: Accelerating phase-field-based microstructure evolution predictions via surrogate models trained by machine learning methods. npj Computational Materials 7(1), 3 (2021) Wiewel et al. [2019] Wiewel, S., Becher, M., Thuerey, N.: Latent space physics: Towards learning the temporal evolution of fluid flow. In: Computer Graphics Forum, vol. 38, pp. 71–82 (2019). Wiley Online Library Nakamura et al. [2021] Nakamura, T., Fukami, K., Hasegawa, K., Nabae, Y., Fukagata, K.: Convolutional neural network and long short-term memory based reduced order surrogate for minimal turbulent channel flow. Physics of Fluids 33(2) (2021) Maulik et al. [2021] Maulik, R., Lusch, B., Balaprakash, P.: Reduced-order modeling of advection-dominated systems with recurrent neural networks and convolutional autoencoders. Physics of Fluids 33(3) (2021) Vlachas et al. [2022] Vlachas, P.R., Arampatzis, G., Uhler, C., Koumoutsakos, P.: Multiscale simulations of complex systems by learning their effective dynamics. Nature Machine Intelligence 4(4), 359–366 (2022) Rautela et al. [2022] Rautela, M., Senthilnath, J., Monaco, E., Gopalakrishnan, S.: Delamination prediction in composite panels using unsupervised-feature learning methods with wavelet-enhanced guided wave representations. Composite Structures 291, 115579 (2022) Scheinker et al. [2023] Scheinker, A., Cropp, F., Filippetto, D.: Adaptive autoencoder latent space tuning for more robust machine learning beyond the training set for six-dimensional phase space diagnostics of a time-varying ultrafast electron-diffraction compact accelerator. Physical Review E 107(4), 045302 (2023) Kingma and Welling [2013] Kingma, D.P., Welling, M.: Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114 (2013) Hartmann et al. 
[2022] Hartmann, G., Goetzke, G., Düsterer, S., Feuer-Forson, P., Lever, F., Meier, D., Möller, F., Vera Ramirez, L., Guehr, M., Tiedtke, K., et al.: Unsupervised real-world knowledge extraction via disentangled variational autoencoders for photon diagnostics. Scientific Reports 12(1), 20783 (2022) Scheinker et al. [2023] Scheinker, A., Filippetto, D., Cropp, F.: 6d phase space diagnostics based on adaptively tuned physics-informed generative convolutional neural networks. In: Journal of Physics: Conference Series, vol. 2420, p. 012068 (2023). IOP Publishing Cathey et al. [2018] Cathey, B., Cousineau, S., Aleksandrov, A., Zhukov, A.: First six dimensional phase space measurement of an accelerator beam. Physical Review Letters 121(6), 064804 (2018) Tenenbaum [2005] Tenenbaum, P.: Lucretia: A matlab-based toolbox for the modelling and simulation of single-pass electron beam transport systems. In: Proceedings of the 2005 Particle Accelerator Conference, pp. 4197–4199 (2005). IEEE Young and Billen [2003] Young, L., Billen, J.: The particle tracking code parmela. In: Proceedings of the Particle Accelerator Conference, vol. 5, pp. 3521–3523 (2003) Pang and Rybarcyk [2014] Pang, X., Rybarcyk, L.: Gpu accelerated online multi-particle beam dynamics simulator for ion linear particle accelerators. Computer Physics Communications 185(3), 744–753 (2014) Adelmann [2019] Adelmann, A.: On nonintrusive uncertainty quantification and surrogate model construction in particle accelerator modeling. SIAM/ASA Journal on Uncertainty Quantification 7(2), 383–416 (2019) Newton [1970] Newton, R.: Inverse problems in physics. SIAM Review 12(3), 346–356 (1970) Wolski et al. [2022] Wolski, A., Johnson, M.A., King, M., Militsyn, B.L., Williams, P.H.: Transverse phase space tomography in an accelerator test facility using image compression and machine learning. Physical Review Accelerators and Beams 25(12), 122803 (2022) Mayet et al. [2022] Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022) Zhu et al. [2] Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. [2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. 
[2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. 
[2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Chen, J., Hachem, E., Viquerat, J.: Graph neural networks for laminar flow prediction around random two-dimensional shapes. Physics of Fluids 33(12) (2021) Scheinker [2021] Scheinker, A.: Adaptive machine learning for time-varying systems: low dimensional latent space tuning. Journal of Instrumentation 16(10), 10008 (2021) Montes de Oca Zapiain et al. [2021] Oca Zapiain, D., Stewart, J.A., Dingreville, R.: Accelerating phase-field-based microstructure evolution predictions via surrogate models trained by machine learning methods. npj Computational Materials 7(1), 3 (2021) Wiewel et al. [2019] Wiewel, S., Becher, M., Thuerey, N.: Latent space physics: Towards learning the temporal evolution of fluid flow. 
In: Computer Graphics Forum, vol. 38, pp. 71–82 (2019). Wiley Online Library Nakamura et al. [2021] Nakamura, T., Fukami, K., Hasegawa, K., Nabae, Y., Fukagata, K.: Convolutional neural network and long short-term memory based reduced order surrogate for minimal turbulent channel flow. Physics of Fluids 33(2) (2021) Maulik et al. [2021] Maulik, R., Lusch, B., Balaprakash, P.: Reduced-order modeling of advection-dominated systems with recurrent neural networks and convolutional autoencoders. Physics of Fluids 33(3) (2021) Vlachas et al. [2022] Vlachas, P.R., Arampatzis, G., Uhler, C., Koumoutsakos, P.: Multiscale simulations of complex systems by learning their effective dynamics. Nature Machine Intelligence 4(4), 359–366 (2022) Rautela et al. [2022] Rautela, M., Senthilnath, J., Monaco, E., Gopalakrishnan, S.: Delamination prediction in composite panels using unsupervised-feature learning methods with wavelet-enhanced guided wave representations. Composite Structures 291, 115579 (2022) Scheinker et al. [2023] Scheinker, A., Cropp, F., Filippetto, D.: Adaptive autoencoder latent space tuning for more robust machine learning beyond the training set for six-dimensional phase space diagnostics of a time-varying ultrafast electron-diffraction compact accelerator. Physical Review E 107(4), 045302 (2023) Kingma and Welling [2013] Kingma, D.P., Welling, M.: Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114 (2013) Hartmann et al. [2022] Hartmann, G., Goetzke, G., Düsterer, S., Feuer-Forson, P., Lever, F., Meier, D., Möller, F., Vera Ramirez, L., Guehr, M., Tiedtke, K., et al.: Unsupervised real-world knowledge extraction via disentangled variational autoencoders for photon diagnostics. Scientific Reports 12(1), 20783 (2022) Scheinker et al. [2023] Scheinker, A., Filippetto, D., Cropp, F.: 6d phase space diagnostics based on adaptively tuned physics-informed generative convolutional neural networks. In: Journal of Physics: Conference Series, vol. 2420, p. 012068 (2023). IOP Publishing Cathey et al. [2018] Cathey, B., Cousineau, S., Aleksandrov, A., Zhukov, A.: First six dimensional phase space measurement of an accelerator beam. Physical Review Letters 121(6), 064804 (2018) Tenenbaum [2005] Tenenbaum, P.: Lucretia: A matlab-based toolbox for the modelling and simulation of single-pass electron beam transport systems. In: Proceedings of the 2005 Particle Accelerator Conference, pp. 4197–4199 (2005). IEEE Young and Billen [2003] Young, L., Billen, J.: The particle tracking code parmela. In: Proceedings of the Particle Accelerator Conference, vol. 5, pp. 3521–3523 (2003) Pang and Rybarcyk [2014] Pang, X., Rybarcyk, L.: Gpu accelerated online multi-particle beam dynamics simulator for ion linear particle accelerators. Computer Physics Communications 185(3), 744–753 (2014) Adelmann [2019] Adelmann, A.: On nonintrusive uncertainty quantification and surrogate model construction in particle accelerator modeling. SIAM/ASA Journal on Uncertainty Quantification 7(2), 383–416 (2019) Newton [1970] Newton, R.: Inverse problems in physics. SIAM Review 12(3), 346–356 (1970) Wolski et al. [2022] Wolski, A., Johnson, M.A., King, M., Militsyn, B.L., Williams, P.H.: Transverse phase space tomography in an accelerator test facility using image compression and machine learning. Physical Review Accelerators and Beams 25(12), 122803 (2022) Mayet et al. 
[2022] Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022) Zhu et al. [2] Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. [2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. 
The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. 
[2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Scheinker, A.: Adaptive machine learning for time-varying systems: low dimensional latent space tuning. Journal of Instrumentation 16(10), 10008 (2021) Montes de Oca Zapiain et al. [2021] Oca Zapiain, D., Stewart, J.A., Dingreville, R.: Accelerating phase-field-based microstructure evolution predictions via surrogate models trained by machine learning methods. npj Computational Materials 7(1), 3 (2021) Wiewel et al. [2019] Wiewel, S., Becher, M., Thuerey, N.: Latent space physics: Towards learning the temporal evolution of fluid flow. In: Computer Graphics Forum, vol. 38, pp. 71–82 (2019). Wiley Online Library Nakamura et al. [2021] Nakamura, T., Fukami, K., Hasegawa, K., Nabae, Y., Fukagata, K.: Convolutional neural network and long short-term memory based reduced order surrogate for minimal turbulent channel flow. Physics of Fluids 33(2) (2021) Maulik et al. [2021] Maulik, R., Lusch, B., Balaprakash, P.: Reduced-order modeling of advection-dominated systems with recurrent neural networks and convolutional autoencoders. Physics of Fluids 33(3) (2021) Vlachas et al. [2022] Vlachas, P.R., Arampatzis, G., Uhler, C., Koumoutsakos, P.: Multiscale simulations of complex systems by learning their effective dynamics. Nature Machine Intelligence 4(4), 359–366 (2022) Rautela et al. [2022] Rautela, M., Senthilnath, J., Monaco, E., Gopalakrishnan, S.: Delamination prediction in composite panels using unsupervised-feature learning methods with wavelet-enhanced guided wave representations. Composite Structures 291, 115579 (2022) Scheinker et al. [2023] Scheinker, A., Cropp, F., Filippetto, D.: Adaptive autoencoder latent space tuning for more robust machine learning beyond the training set for six-dimensional phase space diagnostics of a time-varying ultrafast electron-diffraction compact accelerator. Physical Review E 107(4), 045302 (2023) Kingma and Welling [2013] Kingma, D.P., Welling, M.: Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114 (2013) Hartmann et al. [2022] Hartmann, G., Goetzke, G., Düsterer, S., Feuer-Forson, P., Lever, F., Meier, D., Möller, F., Vera Ramirez, L., Guehr, M., Tiedtke, K., et al.: Unsupervised real-world knowledge extraction via disentangled variational autoencoders for photon diagnostics. Scientific Reports 12(1), 20783 (2022) Scheinker et al. [2023] Scheinker, A., Filippetto, D., Cropp, F.: 6d phase space diagnostics based on adaptively tuned physics-informed generative convolutional neural networks. In: Journal of Physics: Conference Series, vol. 2420, p. 012068 (2023). IOP Publishing Cathey et al. 
[2018] Cathey, B., Cousineau, S., Aleksandrov, A., Zhukov, A.: First six dimensional phase space measurement of an accelerator beam. Physical Review Letters 121(6), 064804 (2018) Tenenbaum [2005] Tenenbaum, P.: Lucretia: A matlab-based toolbox for the modelling and simulation of single-pass electron beam transport systems. In: Proceedings of the 2005 Particle Accelerator Conference, pp. 4197–4199 (2005). IEEE Young and Billen [2003] Young, L., Billen, J.: The particle tracking code parmela. In: Proceedings of the Particle Accelerator Conference, vol. 5, pp. 3521–3523 (2003) Pang and Rybarcyk [2014] Pang, X., Rybarcyk, L.: Gpu accelerated online multi-particle beam dynamics simulator for ion linear particle accelerators. Computer Physics Communications 185(3), 744–753 (2014) Adelmann [2019] Adelmann, A.: On nonintrusive uncertainty quantification and surrogate model construction in particle accelerator modeling. SIAM/ASA Journal on Uncertainty Quantification 7(2), 383–416 (2019) Newton [1970] Newton, R.: Inverse problems in physics. SIAM Review 12(3), 346–356 (1970) Wolski et al. [2022] Wolski, A., Johnson, M.A., King, M., Militsyn, B.L., Williams, P.H.: Transverse phase space tomography in an accelerator test facility using image compression and machine learning. Physical Review Accelerators and Beams 25(12), 122803 (2022) Mayet et al. [2022] Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022) Zhu et al. [2] Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. [2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. 
[2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. 
[2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Oca Zapiain, D., Stewart, J.A., Dingreville, R.: Accelerating phase-field-based microstructure evolution predictions via surrogate models trained by machine learning methods. npj Computational Materials 7(1), 3 (2021) Wiewel et al. [2019] Wiewel, S., Becher, M., Thuerey, N.: Latent space physics: Towards learning the temporal evolution of fluid flow. In: Computer Graphics Forum, vol. 38, pp. 71–82 (2019). Wiley Online Library Nakamura et al. [2021] Nakamura, T., Fukami, K., Hasegawa, K., Nabae, Y., Fukagata, K.: Convolutional neural network and long short-term memory based reduced order surrogate for minimal turbulent channel flow. Physics of Fluids 33(2) (2021) Maulik et al. [2021] Maulik, R., Lusch, B., Balaprakash, P.: Reduced-order modeling of advection-dominated systems with recurrent neural networks and convolutional autoencoders. Physics of Fluids 33(3) (2021) Vlachas et al. [2022] Vlachas, P.R., Arampatzis, G., Uhler, C., Koumoutsakos, P.: Multiscale simulations of complex systems by learning their effective dynamics. Nature Machine Intelligence 4(4), 359–366 (2022) Rautela et al. 
[2022] Rautela, M., Senthilnath, J., Monaco, E., Gopalakrishnan, S.: Delamination prediction in composite panels using unsupervised-feature learning methods with wavelet-enhanced guided wave representations. Composite Structures 291, 115579 (2022) Scheinker et al. [2023] Scheinker, A., Cropp, F., Filippetto, D.: Adaptive autoencoder latent space tuning for more robust machine learning beyond the training set for six-dimensional phase space diagnostics of a time-varying ultrafast electron-diffraction compact accelerator. Physical Review E 107(4), 045302 (2023) Kingma and Welling [2013] Kingma, D.P., Welling, M.: Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114 (2013) Hartmann et al. [2022] Hartmann, G., Goetzke, G., Düsterer, S., Feuer-Forson, P., Lever, F., Meier, D., Möller, F., Vera Ramirez, L., Guehr, M., Tiedtke, K., et al.: Unsupervised real-world knowledge extraction via disentangled variational autoencoders for photon diagnostics. Scientific Reports 12(1), 20783 (2022) Scheinker et al. [2023] Scheinker, A., Filippetto, D., Cropp, F.: 6d phase space diagnostics based on adaptively tuned physics-informed generative convolutional neural networks. In: Journal of Physics: Conference Series, vol. 2420, p. 012068 (2023). IOP Publishing Cathey et al. [2018] Cathey, B., Cousineau, S., Aleksandrov, A., Zhukov, A.: First six dimensional phase space measurement of an accelerator beam. Physical Review Letters 121(6), 064804 (2018) Tenenbaum [2005] Tenenbaum, P.: Lucretia: A matlab-based toolbox for the modelling and simulation of single-pass electron beam transport systems. In: Proceedings of the 2005 Particle Accelerator Conference, pp. 4197–4199 (2005). IEEE Young and Billen [2003] Young, L., Billen, J.: The particle tracking code parmela. In: Proceedings of the Particle Accelerator Conference, vol. 5, pp. 3521–3523 (2003) Pang and Rybarcyk [2014] Pang, X., Rybarcyk, L.: Gpu accelerated online multi-particle beam dynamics simulator for ion linear particle accelerators. Computer Physics Communications 185(3), 744–753 (2014) Adelmann [2019] Adelmann, A.: On nonintrusive uncertainty quantification and surrogate model construction in particle accelerator modeling. SIAM/ASA Journal on Uncertainty Quantification 7(2), 383–416 (2019) Newton [1970] Newton, R.: Inverse problems in physics. SIAM Review 12(3), 346–356 (1970) Wolski et al. [2022] Wolski, A., Johnson, M.A., King, M., Militsyn, B.L., Williams, P.H.: Transverse phase space tomography in an accelerator test facility using image compression and machine learning. Physical Review Accelerators and Beams 25(12), 122803 (2022) Mayet et al. [2022] Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022) Zhu et al. [2] Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. 
[2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. 
Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021)
Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in Neural Information Processing Systems 28 (2015)
Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021)
Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in VAE latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021)
Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of Cheminformatics 10(1), 1–9 (2018)
Karevan, Z., Suykens, J.A.: Transductive LSTM for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020)
Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014)
Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE Transactions on Image Processing 13(4), 600–612 (2004)
Van der Maaten, L., Hinton, G.: Visualizing data using t-SNE. Journal of Machine Learning Research 9(11) (2008)
McInnes, L., Healy, J., Melville, J.: UMAP: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018)
Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022)
Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023)
Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: GANs trained by a two time-scale update rule converge to a local Nash equilibrium. Advances in Neural Information Processing Systems 30 (2017)
Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in Neural Information Processing Systems 31 (2018)
Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023)
Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of ResNet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE
He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016)
Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018)
Wiewel, S., Becher, M., Thuerey, N.: Latent space physics: Towards learning the temporal evolution of fluid flow. In: Computer Graphics Forum, vol. 38, pp. 71–82 (2019). Wiley Online Library
Nakamura, T., Fukami, K., Hasegawa, K., Nabae, Y., Fukagata, K.: Convolutional neural network and long short-term memory based reduced order surrogate for minimal turbulent channel flow. Physics of Fluids 33(2) (2021)
Maulik, R., Lusch, B., Balaprakash, P.: Reduced-order modeling of advection-dominated systems with recurrent neural networks and convolutional autoencoders. Physics of Fluids 33(3) (2021)
Vlachas, P.R., Arampatzis, G., Uhler, C., Koumoutsakos, P.: Multiscale simulations of complex systems by learning their effective dynamics. Nature Machine Intelligence 4(4), 359–366 (2022)
Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Kingma, D.P., Welling, M.: Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114 (2013) Hartmann et al. [2022] Hartmann, G., Goetzke, G., Düsterer, S., Feuer-Forson, P., Lever, F., Meier, D., Möller, F., Vera Ramirez, L., Guehr, M., Tiedtke, K., et al.: Unsupervised real-world knowledge extraction via disentangled variational autoencoders for photon diagnostics. Scientific Reports 12(1), 20783 (2022) Scheinker et al. [2023] Scheinker, A., Filippetto, D., Cropp, F.: 6d phase space diagnostics based on adaptively tuned physics-informed generative convolutional neural networks. In: Journal of Physics: Conference Series, vol. 2420, p. 012068 (2023). IOP Publishing Cathey et al. [2018] Cathey, B., Cousineau, S., Aleksandrov, A., Zhukov, A.: First six dimensional phase space measurement of an accelerator beam. 
Physical Review Letters 121(6), 064804 (2018) Tenenbaum [2005] Tenenbaum, P.: Lucretia: A matlab-based toolbox for the modelling and simulation of single-pass electron beam transport systems. In: Proceedings of the 2005 Particle Accelerator Conference, pp. 4197–4199 (2005). IEEE Young and Billen [2003] Young, L., Billen, J.: The particle tracking code parmela. In: Proceedings of the Particle Accelerator Conference, vol. 5, pp. 3521–3523 (2003) Pang and Rybarcyk [2014] Pang, X., Rybarcyk, L.: Gpu accelerated online multi-particle beam dynamics simulator for ion linear particle accelerators. Computer Physics Communications 185(3), 744–753 (2014) Adelmann [2019] Adelmann, A.: On nonintrusive uncertainty quantification and surrogate model construction in particle accelerator modeling. SIAM/ASA Journal on Uncertainty Quantification 7(2), 383–416 (2019) Newton [1970] Newton, R.: Inverse problems in physics. SIAM Review 12(3), 346–356 (1970) Wolski et al. [2022] Wolski, A., Johnson, M.A., King, M., Militsyn, B.L., Williams, P.H.: Transverse phase space tomography in an accelerator test facility using image compression and machine learning. Physical Review Accelerators and Beams 25(12), 122803 (2022) Mayet et al. [2022] Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022) Zhu et al. [2] Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. [2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. 
Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. 
IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Hartmann, G., Goetzke, G., Düsterer, S., Feuer-Forson, P., Lever, F., Meier, D., Möller, F., Vera Ramirez, L., Guehr, M., Tiedtke, K., et al.: Unsupervised real-world knowledge extraction via disentangled variational autoencoders for photon diagnostics. Scientific Reports 12(1), 20783 (2022) Scheinker et al. [2023] Scheinker, A., Filippetto, D., Cropp, F.: 6d phase space diagnostics based on adaptively tuned physics-informed generative convolutional neural networks. In: Journal of Physics: Conference Series, vol. 2420, p. 012068 (2023). IOP Publishing Cathey et al. [2018] Cathey, B., Cousineau, S., Aleksandrov, A., Zhukov, A.: First six dimensional phase space measurement of an accelerator beam. Physical Review Letters 121(6), 064804 (2018) Tenenbaum [2005] Tenenbaum, P.: Lucretia: A matlab-based toolbox for the modelling and simulation of single-pass electron beam transport systems. In: Proceedings of the 2005 Particle Accelerator Conference, pp. 4197–4199 (2005). IEEE Young and Billen [2003] Young, L., Billen, J.: The particle tracking code parmela. In: Proceedings of the Particle Accelerator Conference, vol. 5, pp. 3521–3523 (2003) Pang and Rybarcyk [2014] Pang, X., Rybarcyk, L.: Gpu accelerated online multi-particle beam dynamics simulator for ion linear particle accelerators. 
Computer Physics Communications 185(3), 744–753 (2014) Adelmann [2019] Adelmann, A.: On nonintrusive uncertainty quantification and surrogate model construction in particle accelerator modeling. SIAM/ASA Journal on Uncertainty Quantification 7(2), 383–416 (2019) Newton [1970] Newton, R.: Inverse problems in physics. SIAM Review 12(3), 346–356 (1970) Wolski et al. [2022] Wolski, A., Johnson, M.A., King, M., Militsyn, B.L., Williams, P.H.: Transverse phase space tomography in an accelerator test facility using image compression and machine learning. Physical Review Accelerators and Beams 25(12), 122803 (2022) Mayet et al. [2022] Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022) Zhu et al. [2] Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. [2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. 
Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. 
IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Scheinker, A., Filippetto, D., Cropp, F.: 6d phase space diagnostics based on adaptively tuned physics-informed generative convolutional neural networks. In: Journal of Physics: Conference Series, vol. 2420, p. 012068 (2023). IOP Publishing Cathey et al. [2018] Cathey, B., Cousineau, S., Aleksandrov, A., Zhukov, A.: First six dimensional phase space measurement of an accelerator beam. Physical Review Letters 121(6), 064804 (2018) Tenenbaum [2005] Tenenbaum, P.: Lucretia: A matlab-based toolbox for the modelling and simulation of single-pass electron beam transport systems. In: Proceedings of the 2005 Particle Accelerator Conference, pp. 4197–4199 (2005). IEEE Young and Billen [2003] Young, L., Billen, J.: The particle tracking code parmela. In: Proceedings of the Particle Accelerator Conference, vol. 5, pp. 3521–3523 (2003) Pang and Rybarcyk [2014] Pang, X., Rybarcyk, L.: Gpu accelerated online multi-particle beam dynamics simulator for ion linear particle accelerators. Computer Physics Communications 185(3), 744–753 (2014) Adelmann [2019] Adelmann, A.: On nonintrusive uncertainty quantification and surrogate model construction in particle accelerator modeling. SIAM/ASA Journal on Uncertainty Quantification 7(2), 383–416 (2019) Newton [1970] Newton, R.: Inverse problems in physics. SIAM Review 12(3), 346–356 (1970) Wolski et al. [2022] Wolski, A., Johnson, M.A., King, M., Militsyn, B.L., Williams, P.H.: Transverse phase space tomography in an accelerator test facility using image compression and machine learning. Physical Review Accelerators and Beams 25(12), 122803 (2022) Mayet et al. [2022] Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022) Zhu et al. 
[2] Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. [2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. 
[2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. 
[2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Cathey, B., Cousineau, S., Aleksandrov, A., Zhukov, A.: First six dimensional phase space measurement of an accelerator beam. Physical Review Letters 121(6), 064804 (2018) Tenenbaum [2005] Tenenbaum, P.: Lucretia: A matlab-based toolbox for the modelling and simulation of single-pass electron beam transport systems. In: Proceedings of the 2005 Particle Accelerator Conference, pp. 4197–4199 (2005). IEEE Young and Billen [2003] Young, L., Billen, J.: The particle tracking code parmela. In: Proceedings of the Particle Accelerator Conference, vol. 5, pp. 3521–3523 (2003) Pang and Rybarcyk [2014] Pang, X., Rybarcyk, L.: Gpu accelerated online multi-particle beam dynamics simulator for ion linear particle accelerators. Computer Physics Communications 185(3), 744–753 (2014) Adelmann [2019] Adelmann, A.: On nonintrusive uncertainty quantification and surrogate model construction in particle accelerator modeling. SIAM/ASA Journal on Uncertainty Quantification 7(2), 383–416 (2019) Newton [1970] Newton, R.: Inverse problems in physics. SIAM Review 12(3), 346–356 (1970) Wolski et al. [2022] Wolski, A., Johnson, M.A., King, M., Militsyn, B.L., Williams, P.H.: Transverse phase space tomography in an accelerator test facility using image compression and machine learning. Physical Review Accelerators and Beams 25(12), 122803 (2022) Mayet et al. [2022] Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022) Zhu et al. [2] Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. [2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. 
Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. 
Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Tenenbaum, P.: Lucretia: A matlab-based toolbox for the modelling and simulation of single-pass electron beam transport systems. In: Proceedings of the 2005 Particle Accelerator Conference, pp. 4197–4199 (2005). IEEE Young and Billen [2003] Young, L., Billen, J.: The particle tracking code parmela. In: Proceedings of the Particle Accelerator Conference, vol. 5, pp. 3521–3523 (2003) Pang and Rybarcyk [2014] Pang, X., Rybarcyk, L.: Gpu accelerated online multi-particle beam dynamics simulator for ion linear particle accelerators. 
Computer Physics Communications 185(3), 744–753 (2014) Adelmann [2019] Adelmann, A.: On nonintrusive uncertainty quantification and surrogate model construction in particle accelerator modeling. SIAM/ASA Journal on Uncertainty Quantification 7(2), 383–416 (2019) Newton [1970] Newton, R.: Inverse problems in physics. SIAM Review 12(3), 346–356 (1970) Wolski et al. [2022] Wolski, A., Johnson, M.A., King, M., Militsyn, B.L., Williams, P.H.: Transverse phase space tomography in an accelerator test facility using image compression and machine learning. Physical Review Accelerators and Beams 25(12), 122803 (2022) Mayet et al. [2022] Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022) Zhu et al. [2] Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. [2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. 
Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. 
IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Young, L., Billen, J.: The particle tracking code parmela. In: Proceedings of the Particle Accelerator Conference, vol. 5, pp. 3521–3523 (2003) Pang and Rybarcyk [2014] Pang, X., Rybarcyk, L.: Gpu accelerated online multi-particle beam dynamics simulator for ion linear particle accelerators. Computer Physics Communications 185(3), 744–753 (2014) Adelmann [2019] Adelmann, A.: On nonintrusive uncertainty quantification and surrogate model construction in particle accelerator modeling. SIAM/ASA Journal on Uncertainty Quantification 7(2), 383–416 (2019) Newton [1970] Newton, R.: Inverse problems in physics. SIAM Review 12(3), 346–356 (1970) Wolski et al. [2022] Wolski, A., Johnson, M.A., King, M., Militsyn, B.L., Williams, P.H.: Transverse phase space tomography in an accelerator test facility using image compression and machine learning. Physical Review Accelerators and Beams 25(12), 122803 (2022) Mayet et al. [2022] Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022) Zhu et al. [2] Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. [2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. 
[2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. 
IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. 
In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. 
[2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. 
[2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. 
[2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. 
[2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. 
Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. 
[2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. 
arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. 
[2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. 
[2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. 
Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. 
[2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. 
Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. 
IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Kipf, T.N., Welling, M.: Semi-supervised classification with graph convolutional networks. arXiv preprint arXiv:1609.02907 (2016) Chen et al. [2021] Chen, J., Hachem, E., Viquerat, J.: Graph neural networks for laminar flow prediction around random two-dimensional shapes. Physics of Fluids 33(12) (2021) Scheinker [2021] Scheinker, A.: Adaptive machine learning for time-varying systems: low dimensional latent space tuning. Journal of Instrumentation 16(10), 10008 (2021) Montes de Oca Zapiain et al. [2021] Oca Zapiain, D., Stewart, J.A., Dingreville, R.: Accelerating phase-field-based microstructure evolution predictions via surrogate models trained by machine learning methods. npj Computational Materials 7(1), 3 (2021) Wiewel et al. [2019] Wiewel, S., Becher, M., Thuerey, N.: Latent space physics: Towards learning the temporal evolution of fluid flow. In: Computer Graphics Forum, vol. 38, pp. 71–82 (2019). Wiley Online Library Nakamura et al. [2021] Nakamura, T., Fukami, K., Hasegawa, K., Nabae, Y., Fukagata, K.: Convolutional neural network and long short-term memory based reduced order surrogate for minimal turbulent channel flow. Physics of Fluids 33(2) (2021) Maulik et al. [2021] Maulik, R., Lusch, B., Balaprakash, P.: Reduced-order modeling of advection-dominated systems with recurrent neural networks and convolutional autoencoders. Physics of Fluids 33(3) (2021) Vlachas et al. [2022] Vlachas, P.R., Arampatzis, G., Uhler, C., Koumoutsakos, P.: Multiscale simulations of complex systems by learning their effective dynamics. Nature Machine Intelligence 4(4), 359–366 (2022) Rautela et al. [2022] Rautela, M., Senthilnath, J., Monaco, E., Gopalakrishnan, S.: Delamination prediction in composite panels using unsupervised-feature learning methods with wavelet-enhanced guided wave representations. Composite Structures 291, 115579 (2022) Scheinker et al. 
[2023] Scheinker, A., Cropp, F., Filippetto, D.: Adaptive autoencoder latent space tuning for more robust machine learning beyond the training set for six-dimensional phase space diagnostics of a time-varying ultrafast electron-diffraction compact accelerator. Physical Review E 107(4), 045302 (2023) Kingma and Welling [2013] Kingma, D.P., Welling, M.: Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114 (2013) Hartmann et al. [2022] Hartmann, G., Goetzke, G., Düsterer, S., Feuer-Forson, P., Lever, F., Meier, D., Möller, F., Vera Ramirez, L., Guehr, M., Tiedtke, K., et al.: Unsupervised real-world knowledge extraction via disentangled variational autoencoders for photon diagnostics. Scientific Reports 12(1), 20783 (2022) Scheinker et al. [2023] Scheinker, A., Filippetto, D., Cropp, F.: 6d phase space diagnostics based on adaptively tuned physics-informed generative convolutional neural networks. In: Journal of Physics: Conference Series, vol. 2420, p. 012068 (2023). IOP Publishing Cathey et al. [2018] Cathey, B., Cousineau, S., Aleksandrov, A., Zhukov, A.: First six dimensional phase space measurement of an accelerator beam. Physical Review Letters 121(6), 064804 (2018) Tenenbaum [2005] Tenenbaum, P.: Lucretia: A matlab-based toolbox for the modelling and simulation of single-pass electron beam transport systems. In: Proceedings of the 2005 Particle Accelerator Conference, pp. 4197–4199 (2005). IEEE Young and Billen [2003] Young, L., Billen, J.: The particle tracking code parmela. In: Proceedings of the Particle Accelerator Conference, vol. 5, pp. 3521–3523 (2003) Pang and Rybarcyk [2014] Pang, X., Rybarcyk, L.: Gpu accelerated online multi-particle beam dynamics simulator for ion linear particle accelerators. Computer Physics Communications 185(3), 744–753 (2014) Adelmann [2019] Adelmann, A.: On nonintrusive uncertainty quantification and surrogate model construction in particle accelerator modeling. SIAM/ASA Journal on Uncertainty Quantification 7(2), 383–416 (2019) Newton [1970] Newton, R.: Inverse problems in physics. SIAM Review 12(3), 346–356 (1970) Wolski et al. [2022] Wolski, A., Johnson, M.A., King, M., Militsyn, B.L., Williams, P.H.: Transverse phase space tomography in an accelerator test facility using image compression and machine learning. Physical Review Accelerators and Beams 25(12), 122803 (2022) Mayet et al. [2022] Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022) Zhu et al. [2] Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. [2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. 
[2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. 
Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Chen, J., Hachem, E., Viquerat, J.: Graph neural networks for laminar flow prediction around random two-dimensional shapes. 
Physics of Fluids 33(12) (2021) Scheinker [2021] Scheinker, A.: Adaptive machine learning for time-varying systems: low dimensional latent space tuning. Journal of Instrumentation 16(10), 10008 (2021) Montes de Oca Zapiain et al. [2021] Oca Zapiain, D., Stewart, J.A., Dingreville, R.: Accelerating phase-field-based microstructure evolution predictions via surrogate models trained by machine learning methods. npj Computational Materials 7(1), 3 (2021) Wiewel et al. [2019] Wiewel, S., Becher, M., Thuerey, N.: Latent space physics: Towards learning the temporal evolution of fluid flow. In: Computer Graphics Forum, vol. 38, pp. 71–82 (2019). Wiley Online Library Nakamura et al. [2021] Nakamura, T., Fukami, K., Hasegawa, K., Nabae, Y., Fukagata, K.: Convolutional neural network and long short-term memory based reduced order surrogate for minimal turbulent channel flow. Physics of Fluids 33(2) (2021) Maulik et al. [2021] Maulik, R., Lusch, B., Balaprakash, P.: Reduced-order modeling of advection-dominated systems with recurrent neural networks and convolutional autoencoders. Physics of Fluids 33(3) (2021) Vlachas et al. [2022] Vlachas, P.R., Arampatzis, G., Uhler, C., Koumoutsakos, P.: Multiscale simulations of complex systems by learning their effective dynamics. Nature Machine Intelligence 4(4), 359–366 (2022) Rautela et al. [2022] Rautela, M., Senthilnath, J., Monaco, E., Gopalakrishnan, S.: Delamination prediction in composite panels using unsupervised-feature learning methods with wavelet-enhanced guided wave representations. Composite Structures 291, 115579 (2022) Scheinker et al. [2023] Scheinker, A., Cropp, F., Filippetto, D.: Adaptive autoencoder latent space tuning for more robust machine learning beyond the training set for six-dimensional phase space diagnostics of a time-varying ultrafast electron-diffraction compact accelerator. Physical Review E 107(4), 045302 (2023) Kingma and Welling [2013] Kingma, D.P., Welling, M.: Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114 (2013) Hartmann et al. [2022] Hartmann, G., Goetzke, G., Düsterer, S., Feuer-Forson, P., Lever, F., Meier, D., Möller, F., Vera Ramirez, L., Guehr, M., Tiedtke, K., et al.: Unsupervised real-world knowledge extraction via disentangled variational autoencoders for photon diagnostics. Scientific Reports 12(1), 20783 (2022) Scheinker et al. [2023] Scheinker, A., Filippetto, D., Cropp, F.: 6d phase space diagnostics based on adaptively tuned physics-informed generative convolutional neural networks. In: Journal of Physics: Conference Series, vol. 2420, p. 012068 (2023). IOP Publishing Cathey et al. [2018] Cathey, B., Cousineau, S., Aleksandrov, A., Zhukov, A.: First six dimensional phase space measurement of an accelerator beam. Physical Review Letters 121(6), 064804 (2018) Tenenbaum [2005] Tenenbaum, P.: Lucretia: A matlab-based toolbox for the modelling and simulation of single-pass electron beam transport systems. In: Proceedings of the 2005 Particle Accelerator Conference, pp. 4197–4199 (2005). IEEE Young and Billen [2003] Young, L., Billen, J.: The particle tracking code parmela. In: Proceedings of the Particle Accelerator Conference, vol. 5, pp. 3521–3523 (2003) Pang and Rybarcyk [2014] Pang, X., Rybarcyk, L.: Gpu accelerated online multi-particle beam dynamics simulator for ion linear particle accelerators. Computer Physics Communications 185(3), 744–753 (2014) Adelmann [2019] Adelmann, A.: On nonintrusive uncertainty quantification and surrogate model construction in particle accelerator modeling. 
SIAM/ASA Journal on Uncertainty Quantification 7(2), 383–416 (2019) Newton [1970] Newton, R.: Inverse problems in physics. SIAM Review 12(3), 346–356 (1970) Wolski et al. [2022] Wolski, A., Johnson, M.A., King, M., Militsyn, B.L., Williams, P.H.: Transverse phase space tomography in an accelerator test facility using image compression and machine learning. Physical Review Accelerators and Beams 25(12), 122803 (2022) Mayet et al. [2022] Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022) Zhu et al. [2] Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. [2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. 
Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. 
[2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Scheinker, A.: Adaptive machine learning for time-varying systems: low dimensional latent space tuning. Journal of Instrumentation 16(10), 10008 (2021) Montes de Oca Zapiain et al. [2021] Oca Zapiain, D., Stewart, J.A., Dingreville, R.: Accelerating phase-field-based microstructure evolution predictions via surrogate models trained by machine learning methods. npj Computational Materials 7(1), 3 (2021) Wiewel et al. [2019] Wiewel, S., Becher, M., Thuerey, N.: Latent space physics: Towards learning the temporal evolution of fluid flow. In: Computer Graphics Forum, vol. 38, pp. 71–82 (2019). Wiley Online Library Nakamura et al. [2021] Nakamura, T., Fukami, K., Hasegawa, K., Nabae, Y., Fukagata, K.: Convolutional neural network and long short-term memory based reduced order surrogate for minimal turbulent channel flow. Physics of Fluids 33(2) (2021) Maulik et al. [2021] Maulik, R., Lusch, B., Balaprakash, P.: Reduced-order modeling of advection-dominated systems with recurrent neural networks and convolutional autoencoders. Physics of Fluids 33(3) (2021) Vlachas et al. [2022] Vlachas, P.R., Arampatzis, G., Uhler, C., Koumoutsakos, P.: Multiscale simulations of complex systems by learning their effective dynamics. Nature Machine Intelligence 4(4), 359–366 (2022) Rautela et al. [2022] Rautela, M., Senthilnath, J., Monaco, E., Gopalakrishnan, S.: Delamination prediction in composite panels using unsupervised-feature learning methods with wavelet-enhanced guided wave representations. Composite Structures 291, 115579 (2022) Scheinker et al. [2023] Scheinker, A., Cropp, F., Filippetto, D.: Adaptive autoencoder latent space tuning for more robust machine learning beyond the training set for six-dimensional phase space diagnostics of a time-varying ultrafast electron-diffraction compact accelerator. Physical Review E 107(4), 045302 (2023) Kingma and Welling [2013] Kingma, D.P., Welling, M.: Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114 (2013) Hartmann et al. 
[2022] Hartmann, G., Goetzke, G., Düsterer, S., Feuer-Forson, P., Lever, F., Meier, D., Möller, F., Vera Ramirez, L., Guehr, M., Tiedtke, K., et al.: Unsupervised real-world knowledge extraction via disentangled variational autoencoders for photon diagnostics. Scientific Reports 12(1), 20783 (2022) Scheinker et al. [2023] Scheinker, A., Filippetto, D., Cropp, F.: 6d phase space diagnostics based on adaptively tuned physics-informed generative convolutional neural networks. In: Journal of Physics: Conference Series, vol. 2420, p. 012068 (2023). IOP Publishing Cathey et al. [2018] Cathey, B., Cousineau, S., Aleksandrov, A., Zhukov, A.: First six dimensional phase space measurement of an accelerator beam. Physical Review Letters 121(6), 064804 (2018) Tenenbaum [2005] Tenenbaum, P.: Lucretia: A matlab-based toolbox for the modelling and simulation of single-pass electron beam transport systems. In: Proceedings of the 2005 Particle Accelerator Conference, pp. 4197–4199 (2005). IEEE Young and Billen [2003] Young, L., Billen, J.: The particle tracking code parmela. In: Proceedings of the Particle Accelerator Conference, vol. 5, pp. 3521–3523 (2003) Pang and Rybarcyk [2014] Pang, X., Rybarcyk, L.: Gpu accelerated online multi-particle beam dynamics simulator for ion linear particle accelerators. Computer Physics Communications 185(3), 744–753 (2014) Adelmann [2019] Adelmann, A.: On nonintrusive uncertainty quantification and surrogate model construction in particle accelerator modeling. SIAM/ASA Journal on Uncertainty Quantification 7(2), 383–416 (2019) Newton [1970] Newton, R.: Inverse problems in physics. SIAM Review 12(3), 346–356 (1970) Wolski et al. [2022] Wolski, A., Johnson, M.A., King, M., Militsyn, B.L., Williams, P.H.: Transverse phase space tomography in an accelerator test facility using image compression and machine learning. Physical Review Accelerators and Beams 25(12), 122803 (2022) Mayet et al. [2022] Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022) Zhu et al. [2] Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. [2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. 
[2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. 
[2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Oca Zapiain, D., Stewart, J.A., Dingreville, R.: Accelerating phase-field-based microstructure evolution predictions via surrogate models trained by machine learning methods. npj Computational Materials 7(1), 3 (2021) Wiewel et al. [2019] Wiewel, S., Becher, M., Thuerey, N.: Latent space physics: Towards learning the temporal evolution of fluid flow. In: Computer Graphics Forum, vol. 38, pp. 71–82 (2019). Wiley Online Library Nakamura et al. [2021] Nakamura, T., Fukami, K., Hasegawa, K., Nabae, Y., Fukagata, K.: Convolutional neural network and long short-term memory based reduced order surrogate for minimal turbulent channel flow. Physics of Fluids 33(2) (2021) Maulik et al. 
[2021] Maulik, R., Lusch, B., Balaprakash, P.: Reduced-order modeling of advection-dominated systems with recurrent neural networks and convolutional autoencoders. Physics of Fluids 33(3) (2021) Vlachas et al. [2022] Vlachas, P.R., Arampatzis, G., Uhler, C., Koumoutsakos, P.: Multiscale simulations of complex systems by learning their effective dynamics. Nature Machine Intelligence 4(4), 359–366 (2022) Rautela et al. [2022] Rautela, M., Senthilnath, J., Monaco, E., Gopalakrishnan, S.: Delamination prediction in composite panels using unsupervised-feature learning methods with wavelet-enhanced guided wave representations. Composite Structures 291, 115579 (2022) Scheinker et al. [2023] Scheinker, A., Cropp, F., Filippetto, D.: Adaptive autoencoder latent space tuning for more robust machine learning beyond the training set for six-dimensional phase space diagnostics of a time-varying ultrafast electron-diffraction compact accelerator. Physical Review E 107(4), 045302 (2023) Kingma and Welling [2013] Kingma, D.P., Welling, M.: Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114 (2013) Hartmann et al. [2022] Hartmann, G., Goetzke, G., Düsterer, S., Feuer-Forson, P., Lever, F., Meier, D., Möller, F., Vera Ramirez, L., Guehr, M., Tiedtke, K., et al.: Unsupervised real-world knowledge extraction via disentangled variational autoencoders for photon diagnostics. Scientific Reports 12(1), 20783 (2022) Scheinker et al. [2023] Scheinker, A., Filippetto, D., Cropp, F.: 6d phase space diagnostics based on adaptively tuned physics-informed generative convolutional neural networks. In: Journal of Physics: Conference Series, vol. 2420, p. 012068 (2023). IOP Publishing Cathey et al. [2018] Cathey, B., Cousineau, S., Aleksandrov, A., Zhukov, A.: First six dimensional phase space measurement of an accelerator beam. Physical Review Letters 121(6), 064804 (2018) Tenenbaum [2005] Tenenbaum, P.: Lucretia: A matlab-based toolbox for the modelling and simulation of single-pass electron beam transport systems. In: Proceedings of the 2005 Particle Accelerator Conference, pp. 4197–4199 (2005). IEEE Young and Billen [2003] Young, L., Billen, J.: The particle tracking code parmela. In: Proceedings of the Particle Accelerator Conference, vol. 5, pp. 3521–3523 (2003) Pang and Rybarcyk [2014] Pang, X., Rybarcyk, L.: Gpu accelerated online multi-particle beam dynamics simulator for ion linear particle accelerators. Computer Physics Communications 185(3), 744–753 (2014) Adelmann [2019] Adelmann, A.: On nonintrusive uncertainty quantification and surrogate model construction in particle accelerator modeling. SIAM/ASA Journal on Uncertainty Quantification 7(2), 383–416 (2019) Newton [1970] Newton, R.: Inverse problems in physics. SIAM Review 12(3), 346–356 (1970) Wolski et al. [2022] Wolski, A., Johnson, M.A., King, M., Militsyn, B.L., Williams, P.H.: Transverse phase space tomography in an accelerator test facility using image compression and machine learning. Physical Review Accelerators and Beams 25(12), 122803 (2022) Mayet et al. [2022] Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022) Zhu et al. 
[2] Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. [2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. 
[2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. 
[2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Wiewel, S., Becher, M., Thuerey, N.: Latent space physics: Towards learning the temporal evolution of fluid flow. In: Computer Graphics Forum, vol. 38, pp. 71–82 (2019). Wiley Online Library Nakamura et al. [2021] Nakamura, T., Fukami, K., Hasegawa, K., Nabae, Y., Fukagata, K.: Convolutional neural network and long short-term memory based reduced order surrogate for minimal turbulent channel flow. Physics of Fluids 33(2) (2021) Maulik et al. [2021] Maulik, R., Lusch, B., Balaprakash, P.: Reduced-order modeling of advection-dominated systems with recurrent neural networks and convolutional autoencoders. Physics of Fluids 33(3) (2021) Vlachas et al. [2022] Vlachas, P.R., Arampatzis, G., Uhler, C., Koumoutsakos, P.: Multiscale simulations of complex systems by learning their effective dynamics. Nature Machine Intelligence 4(4), 359–366 (2022) Rautela et al. [2022] Rautela, M., Senthilnath, J., Monaco, E., Gopalakrishnan, S.: Delamination prediction in composite panels using unsupervised-feature learning methods with wavelet-enhanced guided wave representations. Composite Structures 291, 115579 (2022) Scheinker et al. [2023] Scheinker, A., Cropp, F., Filippetto, D.: Adaptive autoencoder latent space tuning for more robust machine learning beyond the training set for six-dimensional phase space diagnostics of a time-varying ultrafast electron-diffraction compact accelerator. Physical Review E 107(4), 045302 (2023) Kingma and Welling [2013] Kingma, D.P., Welling, M.: Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114 (2013) Hartmann et al. [2022] Hartmann, G., Goetzke, G., Düsterer, S., Feuer-Forson, P., Lever, F., Meier, D., Möller, F., Vera Ramirez, L., Guehr, M., Tiedtke, K., et al.: Unsupervised real-world knowledge extraction via disentangled variational autoencoders for photon diagnostics. Scientific Reports 12(1), 20783 (2022) Scheinker et al. [2023] Scheinker, A., Filippetto, D., Cropp, F.: 6d phase space diagnostics based on adaptively tuned physics-informed generative convolutional neural networks. In: Journal of Physics: Conference Series, vol. 2420, p. 012068 (2023). IOP Publishing Cathey et al. [2018] Cathey, B., Cousineau, S., Aleksandrov, A., Zhukov, A.: First six dimensional phase space measurement of an accelerator beam. Physical Review Letters 121(6), 064804 (2018) Tenenbaum [2005] Tenenbaum, P.: Lucretia: A matlab-based toolbox for the modelling and simulation of single-pass electron beam transport systems. In: Proceedings of the 2005 Particle Accelerator Conference, pp. 4197–4199 (2005). IEEE Young and Billen [2003] Young, L., Billen, J.: The particle tracking code parmela. In: Proceedings of the Particle Accelerator Conference, vol. 5, pp. 
3521–3523 (2003) Pang and Rybarcyk [2014] Pang, X., Rybarcyk, L.: Gpu accelerated online multi-particle beam dynamics simulator for ion linear particle accelerators. Computer Physics Communications 185(3), 744–753 (2014) Adelmann [2019] Adelmann, A.: On nonintrusive uncertainty quantification and surrogate model construction in particle accelerator modeling. SIAM/ASA Journal on Uncertainty Quantification 7(2), 383–416 (2019) Newton [1970] Newton, R.: Inverse problems in physics. SIAM Review 12(3), 346–356 (1970) Wolski et al. [2022] Wolski, A., Johnson, M.A., King, M., Militsyn, B.L., Williams, P.H.: Transverse phase space tomography in an accelerator test facility using image compression and machine learning. Physical Review Accelerators and Beams 25(12), 122803 (2022) Mayet et al. [2022] Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022) Zhu et al. [2] Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. [2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. 
[2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Kingma, D.P., Welling, M.: Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114 (2013) Hartmann et al. [2022] Hartmann, G., Goetzke, G., Düsterer, S., Feuer-Forson, P., Lever, F., Meier, D., Möller, F., Vera Ramirez, L., Guehr, M., Tiedtke, K., et al.: Unsupervised real-world knowledge extraction via disentangled variational autoencoders for photon diagnostics. Scientific Reports 12(1), 20783 (2022) Scheinker et al. [2023] Scheinker, A., Filippetto, D., Cropp, F.: 6d phase space diagnostics based on adaptively tuned physics-informed generative convolutional neural networks. In: Journal of Physics: Conference Series, vol. 2420, p. 012068 (2023). IOP Publishing Cathey et al. [2018] Cathey, B., Cousineau, S., Aleksandrov, A., Zhukov, A.: First six dimensional phase space measurement of an accelerator beam. Physical Review Letters 121(6), 064804 (2018) Tenenbaum [2005] Tenenbaum, P.: Lucretia: A matlab-based toolbox for the modelling and simulation of single-pass electron beam transport systems. In: Proceedings of the 2005 Particle Accelerator Conference, pp. 4197–4199 (2005). IEEE Young and Billen [2003] Young, L., Billen, J.: The particle tracking code parmela. In: Proceedings of the Particle Accelerator Conference, vol. 5, pp. 3521–3523 (2003) Pang and Rybarcyk [2014] Pang, X., Rybarcyk, L.: Gpu accelerated online multi-particle beam dynamics simulator for ion linear particle accelerators. Computer Physics Communications 185(3), 744–753 (2014) Adelmann [2019] Adelmann, A.: On nonintrusive uncertainty quantification and surrogate model construction in particle accelerator modeling. SIAM/ASA Journal on Uncertainty Quantification 7(2), 383–416 (2019) Newton [1970] Newton, R.: Inverse problems in physics. SIAM Review 12(3), 346–356 (1970) Wolski et al. [2022] Wolski, A., Johnson, M.A., King, M., Militsyn, B.L., Williams, P.H.: Transverse phase space tomography in an accelerator test facility using image compression and machine learning. Physical Review Accelerators and Beams 25(12), 122803 (2022) Mayet et al. [2022] Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022) Zhu et al. [2] Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. [2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. 
Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. 
Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Hartmann, G., Goetzke, G., Düsterer, S., Feuer-Forson, P., Lever, F., Meier, D., Möller, F., Vera Ramirez, L., Guehr, M., Tiedtke, K., et al.: Unsupervised real-world knowledge extraction via disentangled variational autoencoders for photon diagnostics. Scientific Reports 12(1), 20783 (2022) Scheinker et al. 
[2023] Scheinker, A., Filippetto, D., Cropp, F.: 6d phase space diagnostics based on adaptively tuned physics-informed generative convolutional neural networks. In: Journal of Physics: Conference Series, vol. 2420, p. 012068 (2023). IOP Publishing Cathey et al. [2018] Cathey, B., Cousineau, S., Aleksandrov, A., Zhukov, A.: First six dimensional phase space measurement of an accelerator beam. Physical Review Letters 121(6), 064804 (2018) Tenenbaum [2005] Tenenbaum, P.: Lucretia: A matlab-based toolbox for the modelling and simulation of single-pass electron beam transport systems. In: Proceedings of the 2005 Particle Accelerator Conference, pp. 4197–4199 (2005). IEEE Young and Billen [2003] Young, L., Billen, J.: The particle tracking code parmela. In: Proceedings of the Particle Accelerator Conference, vol. 5, pp. 3521–3523 (2003) Pang and Rybarcyk [2014] Pang, X., Rybarcyk, L.: Gpu accelerated online multi-particle beam dynamics simulator for ion linear particle accelerators. Computer Physics Communications 185(3), 744–753 (2014) Adelmann [2019] Adelmann, A.: On nonintrusive uncertainty quantification and surrogate model construction in particle accelerator modeling. SIAM/ASA Journal on Uncertainty Quantification 7(2), 383–416 (2019) Newton [1970] Newton, R.: Inverse problems in physics. SIAM Review 12(3), 346–356 (1970) Wolski et al. [2022] Wolski, A., Johnson, M.A., King, M., Militsyn, B.L., Williams, P.H.: Transverse phase space tomography in an accelerator test facility using image compression and machine learning. Physical Review Accelerators and Beams 25(12), 122803 (2022) Mayet et al. [2022] Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022) Zhu et al. [2] Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. [2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. 
[2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. 
Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Scheinker, A., Filippetto, D., Cropp, F.: 6d phase space diagnostics based on adaptively tuned physics-informed generative convolutional neural networks. In: Journal of Physics: Conference Series, vol. 2420, p. 012068 (2023). IOP Publishing Cathey et al. [2018] Cathey, B., Cousineau, S., Aleksandrov, A., Zhukov, A.: First six dimensional phase space measurement of an accelerator beam. Physical Review Letters 121(6), 064804 (2018) Tenenbaum [2005] Tenenbaum, P.: Lucretia: A matlab-based toolbox for the modelling and simulation of single-pass electron beam transport systems. In: Proceedings of the 2005 Particle Accelerator Conference, pp. 4197–4199 (2005). IEEE Young and Billen [2003] Young, L., Billen, J.: The particle tracking code parmela. In: Proceedings of the Particle Accelerator Conference, vol. 5, pp. 
3521–3523 (2003) Pang and Rybarcyk [2014] Pang, X., Rybarcyk, L.: Gpu accelerated online multi-particle beam dynamics simulator for ion linear particle accelerators. Computer Physics Communications 185(3), 744–753 (2014) Adelmann [2019] Adelmann, A.: On nonintrusive uncertainty quantification and surrogate model construction in particle accelerator modeling. SIAM/ASA Journal on Uncertainty Quantification 7(2), 383–416 (2019) Newton [1970] Newton, R.: Inverse problems in physics. SIAM Review 12(3), 346–356 (1970) Wolski et al. [2022] Wolski, A., Johnson, M.A., King, M., Militsyn, B.L., Williams, P.H.: Transverse phase space tomography in an accelerator test facility using image compression and machine learning. Physical Review Accelerators and Beams 25(12), 122803 (2022) Mayet et al. [2022] Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022) Zhu et al. [2] Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. [2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. 
[2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. 
arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Cathey, B., Cousineau, S., Aleksandrov, A., Zhukov, A.: First six dimensional phase space measurement of an accelerator beam. Physical Review Letters 121(6), 064804 (2018) Tenenbaum [2005] Tenenbaum, P.: Lucretia: A matlab-based toolbox for the modelling and simulation of single-pass electron beam transport systems. In: Proceedings of the 2005 Particle Accelerator Conference, pp. 4197–4199 (2005). IEEE Young and Billen [2003] Young, L., Billen, J.: The particle tracking code parmela. In: Proceedings of the Particle Accelerator Conference, vol. 5, pp. 3521–3523 (2003) Pang and Rybarcyk [2014] Pang, X., Rybarcyk, L.: Gpu accelerated online multi-particle beam dynamics simulator for ion linear particle accelerators. Computer Physics Communications 185(3), 744–753 (2014) Adelmann [2019] Adelmann, A.: On nonintrusive uncertainty quantification and surrogate model construction in particle accelerator modeling. SIAM/ASA Journal on Uncertainty Quantification 7(2), 383–416 (2019) Newton [1970] Newton, R.: Inverse problems in physics. SIAM Review 12(3), 346–356 (1970) Wolski et al. [2022] Wolski, A., Johnson, M.A., King, M., Militsyn, B.L., Williams, P.H.: Transverse phase space tomography in an accelerator test facility using image compression and machine learning. Physical Review Accelerators and Beams 25(12), 122803 (2022) Mayet et al. [2022] Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022) Zhu et al. 
[2] Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. [2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. 
[2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. 
[2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Tenenbaum, P.: Lucretia: A matlab-based toolbox for the modelling and simulation of single-pass electron beam transport systems. In: Proceedings of the 2005 Particle Accelerator Conference, pp. 4197–4199 (2005). IEEE Young and Billen [2003] Young, L., Billen, J.: The particle tracking code parmela. In: Proceedings of the Particle Accelerator Conference, vol. 5, pp. 3521–3523 (2003) Pang and Rybarcyk [2014] Pang, X., Rybarcyk, L.: Gpu accelerated online multi-particle beam dynamics simulator for ion linear particle accelerators. Computer Physics Communications 185(3), 744–753 (2014) Adelmann [2019] Adelmann, A.: On nonintrusive uncertainty quantification and surrogate model construction in particle accelerator modeling. SIAM/ASA Journal on Uncertainty Quantification 7(2), 383–416 (2019) Newton [1970] Newton, R.: Inverse problems in physics. SIAM Review 12(3), 346–356 (1970) Wolski et al. [2022] Wolski, A., Johnson, M.A., King, M., Militsyn, B.L., Williams, P.H.: Transverse phase space tomography in an accelerator test facility using image compression and machine learning. Physical Review Accelerators and Beams 25(12), 122803 (2022) Mayet et al. [2022] Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022) Zhu et al. [2] Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. [2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. 
Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. 
IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. 
Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. 
[2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. 
[2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. 
[2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. 
[2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. 
[2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. 
[2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. 
[2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. 
arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. 
IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. 
In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. 
IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. 
[2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. 
[2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. 
IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. 
[2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. 
IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. 
In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. 
[2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. 
[2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. 
IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. 
arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. 
[2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. 
[2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. 
[2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. 
arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. 
[2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. 
[2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. 
[2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. 
[2022] Rautela, M., Senthilnath, J., Monaco, E., Gopalakrishnan, S.: Delamination prediction in composite panels using unsupervised-feature learning methods with wavelet-enhanced guided wave representations. Composite Structures 291, 115579 (2022) Scheinker et al. [2023] Scheinker, A., Cropp, F., Filippetto, D.: Adaptive autoencoder latent space tuning for more robust machine learning beyond the training set for six-dimensional phase space diagnostics of a time-varying ultrafast electron-diffraction compact accelerator. Physical Review E 107(4), 045302 (2023) Kingma and Welling [2013] Kingma, D.P., Welling, M.: Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114 (2013) Hartmann et al. [2022] Hartmann, G., Goetzke, G., Düsterer, S., Feuer-Forson, P., Lever, F., Meier, D., Möller, F., Vera Ramirez, L., Guehr, M., Tiedtke, K., et al.: Unsupervised real-world knowledge extraction via disentangled variational autoencoders for photon diagnostics. Scientific Reports 12(1), 20783 (2022) Scheinker et al. [2023] Scheinker, A., Filippetto, D., Cropp, F.: 6d phase space diagnostics based on adaptively tuned physics-informed generative convolutional neural networks. In: Journal of Physics: Conference Series, vol. 2420, p. 012068 (2023). IOP Publishing Cathey et al. [2018] Cathey, B., Cousineau, S., Aleksandrov, A., Zhukov, A.: First six dimensional phase space measurement of an accelerator beam. Physical Review Letters 121(6), 064804 (2018) Tenenbaum [2005] Tenenbaum, P.: Lucretia: A matlab-based toolbox for the modelling and simulation of single-pass electron beam transport systems. In: Proceedings of the 2005 Particle Accelerator Conference, pp. 4197–4199 (2005). IEEE Young and Billen [2003] Young, L., Billen, J.: The particle tracking code parmela. In: Proceedings of the Particle Accelerator Conference, vol. 5, pp. 3521–3523 (2003) Pang and Rybarcyk [2014] Pang, X., Rybarcyk, L.: Gpu accelerated online multi-particle beam dynamics simulator for ion linear particle accelerators. Computer Physics Communications 185(3), 744–753 (2014) Adelmann [2019] Adelmann, A.: On nonintrusive uncertainty quantification and surrogate model construction in particle accelerator modeling. SIAM/ASA Journal on Uncertainty Quantification 7(2), 383–416 (2019) Newton [1970] Newton, R.: Inverse problems in physics. SIAM Review 12(3), 346–356 (1970) Wolski et al. [2022] Wolski, A., Johnson, M.A., King, M., Militsyn, B.L., Williams, P.H.: Transverse phase space tomography in an accelerator test facility using image compression and machine learning. Physical Review Accelerators and Beams 25(12), 122803 (2022) Mayet et al. [2022] Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022) Zhu et al. [2] Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. 
[2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. 
[2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. 
[2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Vlachas, P.R., Arampatzis, G., Uhler, C., Koumoutsakos, P.: Multiscale simulations of complex systems by learning their effective dynamics. Nature Machine Intelligence 4(4), 359–366 (2022) Rautela et al. [2022] Rautela, M., Senthilnath, J., Monaco, E., Gopalakrishnan, S.: Delamination prediction in composite panels using unsupervised-feature learning methods with wavelet-enhanced guided wave representations. Composite Structures 291, 115579 (2022) Scheinker et al. [2023] Scheinker, A., Cropp, F., Filippetto, D.: Adaptive autoencoder latent space tuning for more robust machine learning beyond the training set for six-dimensional phase space diagnostics of a time-varying ultrafast electron-diffraction compact accelerator. Physical Review E 107(4), 045302 (2023) Kingma and Welling [2013] Kingma, D.P., Welling, M.: Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114 (2013) Hartmann et al. [2022] Hartmann, G., Goetzke, G., Düsterer, S., Feuer-Forson, P., Lever, F., Meier, D., Möller, F., Vera Ramirez, L., Guehr, M., Tiedtke, K., et al.: Unsupervised real-world knowledge extraction via disentangled variational autoencoders for photon diagnostics. Scientific Reports 12(1), 20783 (2022) Scheinker et al. [2023] Scheinker, A., Filippetto, D., Cropp, F.: 6d phase space diagnostics based on adaptively tuned physics-informed generative convolutional neural networks. In: Journal of Physics: Conference Series, vol. 2420, p. 012068 (2023). IOP Publishing Cathey et al. [2018] Cathey, B., Cousineau, S., Aleksandrov, A., Zhukov, A.: First six dimensional phase space measurement of an accelerator beam. Physical Review Letters 121(6), 064804 (2018) Tenenbaum [2005] Tenenbaum, P.: Lucretia: A matlab-based toolbox for the modelling and simulation of single-pass electron beam transport systems. In: Proceedings of the 2005 Particle Accelerator Conference, pp. 4197–4199 (2005). IEEE Young and Billen [2003] Young, L., Billen, J.: The particle tracking code parmela. In: Proceedings of the Particle Accelerator Conference, vol. 5, pp. 3521–3523 (2003) Pang and Rybarcyk [2014] Pang, X., Rybarcyk, L.: Gpu accelerated online multi-particle beam dynamics simulator for ion linear particle accelerators. Computer Physics Communications 185(3), 744–753 (2014) Adelmann [2019] Adelmann, A.: On nonintrusive uncertainty quantification and surrogate model construction in particle accelerator modeling. SIAM/ASA Journal on Uncertainty Quantification 7(2), 383–416 (2019) Newton [1970] Newton, R.: Inverse problems in physics. SIAM Review 12(3), 346–356 (1970) Wolski et al. [2022] Wolski, A., Johnson, M.A., King, M., Militsyn, B.L., Williams, P.H.: Transverse phase space tomography in an accelerator test facility using image compression and machine learning. Physical Review Accelerators and Beams 25(12), 122803 (2022) Mayet et al. [2022] Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022) Zhu et al. 
[2] Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. [2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. 
[2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. 
[2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Rautela, M., Senthilnath, J., Monaco, E., Gopalakrishnan, S.: Delamination prediction in composite panels using unsupervised-feature learning methods with wavelet-enhanced guided wave representations. Composite Structures 291, 115579 (2022) Scheinker et al. [2023] Scheinker, A., Cropp, F., Filippetto, D.: Adaptive autoencoder latent space tuning for more robust machine learning beyond the training set for six-dimensional phase space diagnostics of a time-varying ultrafast electron-diffraction compact accelerator. Physical Review E 107(4), 045302 (2023) Kingma and Welling [2013] Kingma, D.P., Welling, M.: Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114 (2013) Hartmann et al. [2022] Hartmann, G., Goetzke, G., Düsterer, S., Feuer-Forson, P., Lever, F., Meier, D., Möller, F., Vera Ramirez, L., Guehr, M., Tiedtke, K., et al.: Unsupervised real-world knowledge extraction via disentangled variational autoencoders for photon diagnostics. Scientific Reports 12(1), 20783 (2022) Scheinker et al. [2023] Scheinker, A., Filippetto, D., Cropp, F.: 6d phase space diagnostics based on adaptively tuned physics-informed generative convolutional neural networks. In: Journal of Physics: Conference Series, vol. 2420, p. 012068 (2023). IOP Publishing Cathey et al. [2018] Cathey, B., Cousineau, S., Aleksandrov, A., Zhukov, A.: First six dimensional phase space measurement of an accelerator beam. Physical Review Letters 121(6), 064804 (2018) Tenenbaum [2005] Tenenbaum, P.: Lucretia: A matlab-based toolbox for the modelling and simulation of single-pass electron beam transport systems. In: Proceedings of the 2005 Particle Accelerator Conference, pp. 4197–4199 (2005). IEEE Young and Billen [2003] Young, L., Billen, J.: The particle tracking code parmela. In: Proceedings of the Particle Accelerator Conference, vol. 5, pp. 3521–3523 (2003) Pang and Rybarcyk [2014] Pang, X., Rybarcyk, L.: Gpu accelerated online multi-particle beam dynamics simulator for ion linear particle accelerators. Computer Physics Communications 185(3), 744–753 (2014) Adelmann [2019] Adelmann, A.: On nonintrusive uncertainty quantification and surrogate model construction in particle accelerator modeling. SIAM/ASA Journal on Uncertainty Quantification 7(2), 383–416 (2019) Newton [1970] Newton, R.: Inverse problems in physics. SIAM Review 12(3), 346–356 (1970) Wolski et al. [2022] Wolski, A., Johnson, M.A., King, M., Militsyn, B.L., Williams, P.H.: Transverse phase space tomography in an accelerator test facility using image compression and machine learning. Physical Review Accelerators and Beams 25(12), 122803 (2022) Mayet et al. 
[2022] Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022) Zhu et al. [2] Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. [2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. 
The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. 
[2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Scheinker, A., Cropp, F., Filippetto, D.: Adaptive autoencoder latent space tuning for more robust machine learning beyond the training set for six-dimensional phase space diagnostics of a time-varying ultrafast electron-diffraction compact accelerator. Physical Review E 107(4), 045302 (2023) Kingma and Welling [2013] Kingma, D.P., Welling, M.: Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114 (2013) Hartmann et al. [2022] Hartmann, G., Goetzke, G., Düsterer, S., Feuer-Forson, P., Lever, F., Meier, D., Möller, F., Vera Ramirez, L., Guehr, M., Tiedtke, K., et al.: Unsupervised real-world knowledge extraction via disentangled variational autoencoders for photon diagnostics. Scientific Reports 12(1), 20783 (2022) Scheinker et al. [2023] Scheinker, A., Filippetto, D., Cropp, F.: 6d phase space diagnostics based on adaptively tuned physics-informed generative convolutional neural networks. In: Journal of Physics: Conference Series, vol. 2420, p. 012068 (2023). IOP Publishing Cathey et al. [2018] Cathey, B., Cousineau, S., Aleksandrov, A., Zhukov, A.: First six dimensional phase space measurement of an accelerator beam. Physical Review Letters 121(6), 064804 (2018) Tenenbaum [2005] Tenenbaum, P.: Lucretia: A matlab-based toolbox for the modelling and simulation of single-pass electron beam transport systems. In: Proceedings of the 2005 Particle Accelerator Conference, pp. 4197–4199 (2005). IEEE Young and Billen [2003] Young, L., Billen, J.: The particle tracking code parmela. In: Proceedings of the Particle Accelerator Conference, vol. 5, pp. 3521–3523 (2003) Pang and Rybarcyk [2014] Pang, X., Rybarcyk, L.: Gpu accelerated online multi-particle beam dynamics simulator for ion linear particle accelerators. Computer Physics Communications 185(3), 744–753 (2014) Adelmann [2019] Adelmann, A.: On nonintrusive uncertainty quantification and surrogate model construction in particle accelerator modeling. SIAM/ASA Journal on Uncertainty Quantification 7(2), 383–416 (2019) Newton [1970] Newton, R.: Inverse problems in physics. SIAM Review 12(3), 346–356 (1970) Wolski et al. [2022] Wolski, A., Johnson, M.A., King, M., Militsyn, B.L., Williams, P.H.: Transverse phase space tomography in an accelerator test facility using image compression and machine learning. Physical Review Accelerators and Beams 25(12), 122803 (2022) Mayet et al. [2022] Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. 
Physical Review Accelerators and Beams 25(9), 094601 (2022) Zhu et al. [2] Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. [2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. 
[2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. 
[2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Kingma, D.P., Welling, M.: Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114 (2013) Hartmann et al. [2022] Hartmann, G., Goetzke, G., Düsterer, S., Feuer-Forson, P., Lever, F., Meier, D., Möller, F., Vera Ramirez, L., Guehr, M., Tiedtke, K., et al.: Unsupervised real-world knowledge extraction via disentangled variational autoencoders for photon diagnostics. Scientific Reports 12(1), 20783 (2022) Scheinker et al. [2023] Scheinker, A., Filippetto, D., Cropp, F.: 6d phase space diagnostics based on adaptively tuned physics-informed generative convolutional neural networks. In: Journal of Physics: Conference Series, vol. 2420, p. 012068 (2023). IOP Publishing Cathey et al. [2018] Cathey, B., Cousineau, S., Aleksandrov, A., Zhukov, A.: First six dimensional phase space measurement of an accelerator beam. Physical Review Letters 121(6), 064804 (2018) Tenenbaum [2005] Tenenbaum, P.: Lucretia: A matlab-based toolbox for the modelling and simulation of single-pass electron beam transport systems. In: Proceedings of the 2005 Particle Accelerator Conference, pp. 4197–4199 (2005). IEEE Young and Billen [2003] Young, L., Billen, J.: The particle tracking code parmela. In: Proceedings of the Particle Accelerator Conference, vol. 5, pp. 3521–3523 (2003) Pang and Rybarcyk [2014] Pang, X., Rybarcyk, L.: Gpu accelerated online multi-particle beam dynamics simulator for ion linear particle accelerators. Computer Physics Communications 185(3), 744–753 (2014) Adelmann [2019] Adelmann, A.: On nonintrusive uncertainty quantification and surrogate model construction in particle accelerator modeling. SIAM/ASA Journal on Uncertainty Quantification 7(2), 383–416 (2019) Newton [1970] Newton, R.: Inverse problems in physics. SIAM Review 12(3), 346–356 (1970) Wolski et al. [2022] Wolski, A., Johnson, M.A., King, M., Militsyn, B.L., Williams, P.H.: Transverse phase space tomography in an accelerator test facility using image compression and machine learning. Physical Review Accelerators and Beams 25(12), 122803 (2022) Mayet et al. [2022] Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022) Zhu et al. [2] Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. 
Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. [2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. 
[2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. 
[2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Hartmann, G., Goetzke, G., Düsterer, S., Feuer-Forson, P., Lever, F., Meier, D., Möller, F., Vera Ramirez, L., Guehr, M., Tiedtke, K., et al.: Unsupervised real-world knowledge extraction via disentangled variational autoencoders for photon diagnostics. Scientific Reports 12(1), 20783 (2022) Scheinker et al. [2023] Scheinker, A., Filippetto, D., Cropp, F.: 6d phase space diagnostics based on adaptively tuned physics-informed generative convolutional neural networks. In: Journal of Physics: Conference Series, vol. 2420, p. 012068 (2023). IOP Publishing Cathey et al. [2018] Cathey, B., Cousineau, S., Aleksandrov, A., Zhukov, A.: First six dimensional phase space measurement of an accelerator beam. Physical Review Letters 121(6), 064804 (2018) Tenenbaum [2005] Tenenbaum, P.: Lucretia: A matlab-based toolbox for the modelling and simulation of single-pass electron beam transport systems. In: Proceedings of the 2005 Particle Accelerator Conference, pp. 4197–4199 (2005). IEEE Young and Billen [2003] Young, L., Billen, J.: The particle tracking code parmela. In: Proceedings of the Particle Accelerator Conference, vol. 5, pp. 3521–3523 (2003) Pang and Rybarcyk [2014] Pang, X., Rybarcyk, L.: Gpu accelerated online multi-particle beam dynamics simulator for ion linear particle accelerators. Computer Physics Communications 185(3), 744–753 (2014) Adelmann [2019] Adelmann, A.: On nonintrusive uncertainty quantification and surrogate model construction in particle accelerator modeling. SIAM/ASA Journal on Uncertainty Quantification 7(2), 383–416 (2019) Newton [1970] Newton, R.: Inverse problems in physics. SIAM Review 12(3), 346–356 (1970) Wolski et al. [2022] Wolski, A., Johnson, M.A., King, M., Militsyn, B.L., Williams, P.H.: Transverse phase space tomography in an accelerator test facility using image compression and machine learning. Physical Review Accelerators and Beams 25(12), 122803 (2022) Mayet et al. [2022] Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022) Zhu et al. [2] Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. [2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. 
Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. 
Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Scheinker, A., Filippetto, D., Cropp, F.: 6d phase space diagnostics based on adaptively tuned physics-informed generative convolutional neural networks. In: Journal of Physics: Conference Series, vol. 2420, p. 012068 (2023). IOP Publishing Cathey et al. [2018] Cathey, B., Cousineau, S., Aleksandrov, A., Zhukov, A.: First six dimensional phase space measurement of an accelerator beam. 
Physical Review Letters 121(6), 064804 (2018) Tenenbaum [2005] Tenenbaum, P.: Lucretia: A matlab-based toolbox for the modelling and simulation of single-pass electron beam transport systems. In: Proceedings of the 2005 Particle Accelerator Conference, pp. 4197–4199 (2005). IEEE Young and Billen [2003] Young, L., Billen, J.: The particle tracking code parmela. In: Proceedings of the Particle Accelerator Conference, vol. 5, pp. 3521–3523 (2003) Pang and Rybarcyk [2014] Pang, X., Rybarcyk, L.: Gpu accelerated online multi-particle beam dynamics simulator for ion linear particle accelerators. Computer Physics Communications 185(3), 744–753 (2014) Adelmann [2019] Adelmann, A.: On nonintrusive uncertainty quantification and surrogate model construction in particle accelerator modeling. SIAM/ASA Journal on Uncertainty Quantification 7(2), 383–416 (2019) Newton [1970] Newton, R.: Inverse problems in physics. SIAM Review 12(3), 346–356 (1970) Wolski et al. [2022] Wolski, A., Johnson, M.A., King, M., Militsyn, B.L., Williams, P.H.: Transverse phase space tomography in an accelerator test facility using image compression and machine learning. Physical Review Accelerators and Beams 25(12), 122803 (2022) Mayet et al. [2022] Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022) Zhu et al. [2] Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. [2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. 
Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. 
IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Cathey, B., Cousineau, S., Aleksandrov, A., Zhukov, A.: First six dimensional phase space measurement of an accelerator beam. Physical Review Letters 121(6), 064804 (2018) Tenenbaum [2005] Tenenbaum, P.: Lucretia: A matlab-based toolbox for the modelling and simulation of single-pass electron beam transport systems. In: Proceedings of the 2005 Particle Accelerator Conference, pp. 4197–4199 (2005). IEEE Young and Billen [2003] Young, L., Billen, J.: The particle tracking code parmela. In: Proceedings of the Particle Accelerator Conference, vol. 5, pp. 3521–3523 (2003) Pang and Rybarcyk [2014] Pang, X., Rybarcyk, L.: Gpu accelerated online multi-particle beam dynamics simulator for ion linear particle accelerators. Computer Physics Communications 185(3), 744–753 (2014) Adelmann [2019] Adelmann, A.: On nonintrusive uncertainty quantification and surrogate model construction in particle accelerator modeling. SIAM/ASA Journal on Uncertainty Quantification 7(2), 383–416 (2019) Newton [1970] Newton, R.: Inverse problems in physics. SIAM Review 12(3), 346–356 (1970) Wolski et al. [2022] Wolski, A., Johnson, M.A., King, M., Militsyn, B.L., Williams, P.H.: Transverse phase space tomography in an accelerator test facility using image compression and machine learning. Physical Review Accelerators and Beams 25(12), 122803 (2022) Mayet et al. 
[2022] Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022) Zhu et al. [2] Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. [2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. 
The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. 
[2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Tenenbaum, P.: Lucretia: A matlab-based toolbox for the modelling and simulation of single-pass electron beam transport systems. In: Proceedings of the 2005 Particle Accelerator Conference, pp. 4197–4199 (2005). IEEE Young and Billen [2003] Young, L., Billen, J.: The particle tracking code parmela. In: Proceedings of the Particle Accelerator Conference, vol. 5, pp. 3521–3523 (2003) Pang and Rybarcyk [2014] Pang, X., Rybarcyk, L.: Gpu accelerated online multi-particle beam dynamics simulator for ion linear particle accelerators. Computer Physics Communications 185(3), 744–753 (2014) Adelmann [2019] Adelmann, A.: On nonintrusive uncertainty quantification and surrogate model construction in particle accelerator modeling. SIAM/ASA Journal on Uncertainty Quantification 7(2), 383–416 (2019) Newton [1970] Newton, R.: Inverse problems in physics. SIAM Review 12(3), 346–356 (1970) Wolski et al. [2022] Wolski, A., Johnson, M.A., King, M., Militsyn, B.L., Williams, P.H.: Transverse phase space tomography in an accelerator test facility using image compression and machine learning. Physical Review Accelerators and Beams 25(12), 122803 (2022) Mayet et al. [2022] Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022) Zhu et al. [2] Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. [2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. 
Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. 
Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Young, L., Billen, J.: The particle tracking code parmela. In: Proceedings of the Particle Accelerator Conference, vol. 5, pp. 3521–3523 (2003) Pang and Rybarcyk [2014] Pang, X., Rybarcyk, L.: Gpu accelerated online multi-particle beam dynamics simulator for ion linear particle accelerators. Computer Physics Communications 185(3), 744–753 (2014) Adelmann [2019] Adelmann, A.: On nonintrusive uncertainty quantification and surrogate model construction in particle accelerator modeling. SIAM/ASA Journal on Uncertainty Quantification 7(2), 383–416 (2019) Newton [1970] Newton, R.: Inverse problems in physics. 
SIAM Review 12(3), 346–356 (1970) Wolski et al. [2022] Wolski, A., Johnson, M.A., King, M., Militsyn, B.L., Williams, P.H.: Transverse phase space tomography in an accelerator test facility using image compression and machine learning. Physical Review Accelerators and Beams 25(12), 122803 (2022) Mayet et al. [2022] Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022) Zhu et al. [2] Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. [2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. 
[2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. 
[2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Pang, X., Rybarcyk, L.: Gpu accelerated online multi-particle beam dynamics simulator for ion linear particle accelerators. Computer Physics Communications 185(3), 744–753 (2014) Adelmann [2019] Adelmann, A.: On nonintrusive uncertainty quantification and surrogate model construction in particle accelerator modeling. SIAM/ASA Journal on Uncertainty Quantification 7(2), 383–416 (2019) Newton [1970] Newton, R.: Inverse problems in physics. SIAM Review 12(3), 346–356 (1970) Wolski et al. [2022] Wolski, A., Johnson, M.A., King, M., Militsyn, B.L., Williams, P.H.: Transverse phase space tomography in an accelerator test facility using image compression and machine learning. Physical Review Accelerators and Beams 25(12), 122803 (2022) Mayet et al. [2022] Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022) Zhu et al. [2] Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. [2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. 
Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. 
Advances in Neural Information Processing Systems 34, 802–814 (2021)
Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of Cheminformatics 10(1), 1–9 (2018)
Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive LSTM for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020)
Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014)
Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE Transactions on Image Processing 13(4), 600–612 (2004)
Van der Maaten and Hinton [2008] Van der Maaten, L., Hinton, G.: Visualizing data using t-SNE. Journal of Machine Learning Research 9(11) (2008)
McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: UMAP: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018)
Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022)
Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023)
Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: GANs trained by a two time-scale update rule converge to a local Nash equilibrium. Advances in Neural Information Processing Systems 30 (2017)
Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in Neural Information Processing Systems 31 (2018)
Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023)
Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of ResNet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE
He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016)
Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018)
Adelmann [2019] Adelmann, A.: On nonintrusive uncertainty quantification and surrogate model construction in particle accelerator modeling. SIAM/ASA Journal on Uncertainty Quantification 7(2), 383–416 (2019)
Newton [1970] Newton, R.: Inverse problems in physics. SIAM Review 12(3), 346–356 (1970)
Wolski et al. [2022] Wolski, A., Johnson, M.A., King, M., Militsyn, B.L., Williams, P.H.: Transverse phase space tomography in an accelerator test facility using image compression and machine learning. Physical Review Accelerators and Beams 25(12), 122803 (2022)
Mayet et al. [2022] Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022)
Zhu et al. [2021] Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2021)
Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018)
Caliari et al. [2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep Lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023)
Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense X-ray free-electron-laser pulses using Bayesian optimization. Physical Review Research 5(2), 023114 (2023)
Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical Review Accelerators and Beams 23(7), 074601 (2020)
Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022)
Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient RF cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022)
Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at Jefferson Laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020)
Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018)
Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023)
Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016)
Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (ARIMA) models. Academic Emergency Medicine 5(7), 739–744 (1998)
Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: MADE: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR
Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016)
Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature Computational Science 2(11), 745–757 (2022)
Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023)
Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE Transactions on Neural Networks and Learning Systems 32(1), 4–24 (2020)
Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons (2008)
Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021)
Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in Neural Information Processing Systems 28 (2015)
Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021)
[2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. 
[2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. 
[2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. 
[2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. 
arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. 
IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. 
In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. 
IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. 
[2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. 
[2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. 
IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. 
[2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. 
IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. 
[2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. 
arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Wiewel, S., Becher, M., Thuerey, N.: Latent space physics: Towards learning the temporal evolution of fluid flow. In: Computer Graphics Forum, vol. 38, pp. 71–82 (2019). Wiley Online Library Nakamura et al. [2021] Nakamura, T., Fukami, K., Hasegawa, K., Nabae, Y., Fukagata, K.: Convolutional neural network and long short-term memory based reduced order surrogate for minimal turbulent channel flow. Physics of Fluids 33(2) (2021) Maulik et al. [2021] Maulik, R., Lusch, B., Balaprakash, P.: Reduced-order modeling of advection-dominated systems with recurrent neural networks and convolutional autoencoders. Physics of Fluids 33(3) (2021) Vlachas et al. [2022] Vlachas, P.R., Arampatzis, G., Uhler, C., Koumoutsakos, P.: Multiscale simulations of complex systems by learning their effective dynamics. Nature Machine Intelligence 4(4), 359–366 (2022) Rautela et al. [2022] Rautela, M., Senthilnath, J., Monaco, E., Gopalakrishnan, S.: Delamination prediction in composite panels using unsupervised-feature learning methods with wavelet-enhanced guided wave representations. Composite Structures 291, 115579 (2022) Scheinker et al. [2023] Scheinker, A., Cropp, F., Filippetto, D.: Adaptive autoencoder latent space tuning for more robust machine learning beyond the training set for six-dimensional phase space diagnostics of a time-varying ultrafast electron-diffraction compact accelerator. Physical Review E 107(4), 045302 (2023) Kingma and Welling [2013] Kingma, D.P., Welling, M.: Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114 (2013) Hartmann et al. 
[2022] Hartmann, G., Goetzke, G., Düsterer, S., Feuer-Forson, P., Lever, F., Meier, D., Möller, F., Vera Ramirez, L., Guehr, M., Tiedtke, K., et al.: Unsupervised real-world knowledge extraction via disentangled variational autoencoders for photon diagnostics. Scientific Reports 12(1), 20783 (2022) Scheinker et al. [2023] Scheinker, A., Filippetto, D., Cropp, F.: 6d phase space diagnostics based on adaptively tuned physics-informed generative convolutional neural networks. In: Journal of Physics: Conference Series, vol. 2420, p. 012068 (2023). IOP Publishing Cathey et al. [2018] Cathey, B., Cousineau, S., Aleksandrov, A., Zhukov, A.: First six dimensional phase space measurement of an accelerator beam. Physical Review Letters 121(6), 064804 (2018) Tenenbaum [2005] Tenenbaum, P.: Lucretia: A matlab-based toolbox for the modelling and simulation of single-pass electron beam transport systems. In: Proceedings of the 2005 Particle Accelerator Conference, pp. 4197–4199 (2005). IEEE Young and Billen [2003] Young, L., Billen, J.: The particle tracking code parmela. In: Proceedings of the Particle Accelerator Conference, vol. 5, pp. 3521–3523 (2003) Pang and Rybarcyk [2014] Pang, X., Rybarcyk, L.: Gpu accelerated online multi-particle beam dynamics simulator for ion linear particle accelerators. Computer Physics Communications 185(3), 744–753 (2014) Adelmann [2019] Adelmann, A.: On nonintrusive uncertainty quantification and surrogate model construction in particle accelerator modeling. SIAM/ASA Journal on Uncertainty Quantification 7(2), 383–416 (2019) Newton [1970] Newton, R.: Inverse problems in physics. SIAM Review 12(3), 346–356 (1970) Wolski et al. [2022] Wolski, A., Johnson, M.A., King, M., Militsyn, B.L., Williams, P.H.: Transverse phase space tomography in an accelerator test facility using image compression and machine learning. Physical Review Accelerators and Beams 25(12), 122803 (2022) Mayet et al. [2022] Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022) Zhu et al. [2] Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. [2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. 
[2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. 
[2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Nakamura, T., Fukami, K., Hasegawa, K., Nabae, Y., Fukagata, K.: Convolutional neural network and long short-term memory based reduced order surrogate for minimal turbulent channel flow. Physics of Fluids 33(2) (2021) Maulik et al. [2021] Maulik, R., Lusch, B., Balaprakash, P.: Reduced-order modeling of advection-dominated systems with recurrent neural networks and convolutional autoencoders. Physics of Fluids 33(3) (2021) Vlachas et al. [2022] Vlachas, P.R., Arampatzis, G., Uhler, C., Koumoutsakos, P.: Multiscale simulations of complex systems by learning their effective dynamics. Nature Machine Intelligence 4(4), 359–366 (2022) Rautela et al. 
[2022] Rautela, M., Senthilnath, J., Monaco, E., Gopalakrishnan, S.: Delamination prediction in composite panels using unsupervised-feature learning methods with wavelet-enhanced guided wave representations. Composite Structures 291, 115579 (2022) Scheinker et al. [2023] Scheinker, A., Cropp, F., Filippetto, D.: Adaptive autoencoder latent space tuning for more robust machine learning beyond the training set for six-dimensional phase space diagnostics of a time-varying ultrafast electron-diffraction compact accelerator. Physical Review E 107(4), 045302 (2023) Kingma and Welling [2013] Kingma, D.P., Welling, M.: Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114 (2013) Hartmann et al. [2022] Hartmann, G., Goetzke, G., Düsterer, S., Feuer-Forson, P., Lever, F., Meier, D., Möller, F., Vera Ramirez, L., Guehr, M., Tiedtke, K., et al.: Unsupervised real-world knowledge extraction via disentangled variational autoencoders for photon diagnostics. Scientific Reports 12(1), 20783 (2022) Scheinker et al. [2023] Scheinker, A., Filippetto, D., Cropp, F.: 6d phase space diagnostics based on adaptively tuned physics-informed generative convolutional neural networks. In: Journal of Physics: Conference Series, vol. 2420, p. 012068 (2023). IOP Publishing Cathey et al. [2018] Cathey, B., Cousineau, S., Aleksandrov, A., Zhukov, A.: First six dimensional phase space measurement of an accelerator beam. Physical Review Letters 121(6), 064804 (2018) Tenenbaum [2005] Tenenbaum, P.: Lucretia: A matlab-based toolbox for the modelling and simulation of single-pass electron beam transport systems. In: Proceedings of the 2005 Particle Accelerator Conference, pp. 4197–4199 (2005). IEEE Young and Billen [2003] Young, L., Billen, J.: The particle tracking code parmela. In: Proceedings of the Particle Accelerator Conference, vol. 5, pp. 3521–3523 (2003) Pang and Rybarcyk [2014] Pang, X., Rybarcyk, L.: Gpu accelerated online multi-particle beam dynamics simulator for ion linear particle accelerators. Computer Physics Communications 185(3), 744–753 (2014) Adelmann [2019] Adelmann, A.: On nonintrusive uncertainty quantification and surrogate model construction in particle accelerator modeling. SIAM/ASA Journal on Uncertainty Quantification 7(2), 383–416 (2019) Newton [1970] Newton, R.: Inverse problems in physics. SIAM Review 12(3), 346–356 (1970) Wolski et al. [2022] Wolski, A., Johnson, M.A., King, M., Militsyn, B.L., Williams, P.H.: Transverse phase space tomography in an accelerator test facility using image compression and machine learning. Physical Review Accelerators and Beams 25(12), 122803 (2022) Mayet et al. [2022] Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022) Zhu et al. [2] Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. 
[2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. 
[2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. 
[2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Maulik, R., Lusch, B., Balaprakash, P.: Reduced-order modeling of advection-dominated systems with recurrent neural networks and convolutional autoencoders. Physics of Fluids 33(3) (2021) Vlachas et al. [2022] Vlachas, P.R., Arampatzis, G., Uhler, C., Koumoutsakos, P.: Multiscale simulations of complex systems by learning their effective dynamics. Nature Machine Intelligence 4(4), 359–366 (2022) Rautela et al. [2022] Rautela, M., Senthilnath, J., Monaco, E., Gopalakrishnan, S.: Delamination prediction in composite panels using unsupervised-feature learning methods with wavelet-enhanced guided wave representations. Composite Structures 291, 115579 (2022) Scheinker et al. [2023] Scheinker, A., Cropp, F., Filippetto, D.: Adaptive autoencoder latent space tuning for more robust machine learning beyond the training set for six-dimensional phase space diagnostics of a time-varying ultrafast electron-diffraction compact accelerator. Physical Review E 107(4), 045302 (2023) Kingma and Welling [2013] Kingma, D.P., Welling, M.: Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114 (2013) Hartmann et al. [2022] Hartmann, G., Goetzke, G., Düsterer, S., Feuer-Forson, P., Lever, F., Meier, D., Möller, F., Vera Ramirez, L., Guehr, M., Tiedtke, K., et al.: Unsupervised real-world knowledge extraction via disentangled variational autoencoders for photon diagnostics. Scientific Reports 12(1), 20783 (2022) Scheinker et al. [2023] Scheinker, A., Filippetto, D., Cropp, F.: 6d phase space diagnostics based on adaptively tuned physics-informed generative convolutional neural networks. In: Journal of Physics: Conference Series, vol. 2420, p. 012068 (2023). IOP Publishing Cathey et al. [2018] Cathey, B., Cousineau, S., Aleksandrov, A., Zhukov, A.: First six dimensional phase space measurement of an accelerator beam. Physical Review Letters 121(6), 064804 (2018) Tenenbaum [2005] Tenenbaum, P.: Lucretia: A matlab-based toolbox for the modelling and simulation of single-pass electron beam transport systems. In: Proceedings of the 2005 Particle Accelerator Conference, pp. 4197–4199 (2005). IEEE Young and Billen [2003] Young, L., Billen, J.: The particle tracking code parmela. In: Proceedings of the Particle Accelerator Conference, vol. 5, pp. 3521–3523 (2003) Pang and Rybarcyk [2014] Pang, X., Rybarcyk, L.: Gpu accelerated online multi-particle beam dynamics simulator for ion linear particle accelerators. Computer Physics Communications 185(3), 744–753 (2014) Adelmann [2019] Adelmann, A.: On nonintrusive uncertainty quantification and surrogate model construction in particle accelerator modeling. SIAM/ASA Journal on Uncertainty Quantification 7(2), 383–416 (2019) Newton [1970] Newton, R.: Inverse problems in physics. SIAM Review 12(3), 346–356 (1970) Wolski et al. [2022] Wolski, A., Johnson, M.A., King, M., Militsyn, B.L., Williams, P.H.: Transverse phase space tomography in an accelerator test facility using image compression and machine learning. Physical Review Accelerators and Beams 25(12), 122803 (2022) Mayet et al. [2022] Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. 
Physical Review Accelerators and Beams 25(9), 094601 (2022) Zhu et al. [2] Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. [2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. 
[2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. 
[2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Vlachas, P.R., Arampatzis, G., Uhler, C., Koumoutsakos, P.: Multiscale simulations of complex systems by learning their effective dynamics. Nature Machine Intelligence 4(4), 359–366 (2022) Rautela et al. [2022] Rautela, M., Senthilnath, J., Monaco, E., Gopalakrishnan, S.: Delamination prediction in composite panels using unsupervised-feature learning methods with wavelet-enhanced guided wave representations. Composite Structures 291, 115579 (2022) Scheinker et al. [2023] Scheinker, A., Cropp, F., Filippetto, D.: Adaptive autoencoder latent space tuning for more robust machine learning beyond the training set for six-dimensional phase space diagnostics of a time-varying ultrafast electron-diffraction compact accelerator. Physical Review E 107(4), 045302 (2023) Kingma and Welling [2013] Kingma, D.P., Welling, M.: Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114 (2013) Hartmann et al. [2022] Hartmann, G., Goetzke, G., Düsterer, S., Feuer-Forson, P., Lever, F., Meier, D., Möller, F., Vera Ramirez, L., Guehr, M., Tiedtke, K., et al.: Unsupervised real-world knowledge extraction via disentangled variational autoencoders for photon diagnostics. Scientific Reports 12(1), 20783 (2022) Scheinker et al. [2023] Scheinker, A., Filippetto, D., Cropp, F.: 6d phase space diagnostics based on adaptively tuned physics-informed generative convolutional neural networks. In: Journal of Physics: Conference Series, vol. 2420, p. 012068 (2023). IOP Publishing Cathey et al. [2018] Cathey, B., Cousineau, S., Aleksandrov, A., Zhukov, A.: First six dimensional phase space measurement of an accelerator beam. Physical Review Letters 121(6), 064804 (2018) Tenenbaum [2005] Tenenbaum, P.: Lucretia: A matlab-based toolbox for the modelling and simulation of single-pass electron beam transport systems. In: Proceedings of the 2005 Particle Accelerator Conference, pp. 4197–4199 (2005). IEEE Young and Billen [2003] Young, L., Billen, J.: The particle tracking code parmela. In: Proceedings of the Particle Accelerator Conference, vol. 5, pp. 3521–3523 (2003) Pang and Rybarcyk [2014] Pang, X., Rybarcyk, L.: Gpu accelerated online multi-particle beam dynamics simulator for ion linear particle accelerators. Computer Physics Communications 185(3), 744–753 (2014) Adelmann [2019] Adelmann, A.: On nonintrusive uncertainty quantification and surrogate model construction in particle accelerator modeling. SIAM/ASA Journal on Uncertainty Quantification 7(2), 383–416 (2019) Newton [1970] Newton, R.: Inverse problems in physics. SIAM Review 12(3), 346–356 (1970) Wolski et al. [2022] Wolski, A., Johnson, M.A., King, M., Militsyn, B.L., Williams, P.H.: Transverse phase space tomography in an accelerator test facility using image compression and machine learning. Physical Review Accelerators and Beams 25(12), 122803 (2022) Mayet et al. 
[2022] Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022) Zhu et al. [2] Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. [2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. 
The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. 
[2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Rautela, M., Senthilnath, J., Monaco, E., Gopalakrishnan, S.: Delamination prediction in composite panels using unsupervised-feature learning methods with wavelet-enhanced guided wave representations. Composite Structures 291, 115579 (2022) Scheinker et al. [2023] Scheinker, A., Cropp, F., Filippetto, D.: Adaptive autoencoder latent space tuning for more robust machine learning beyond the training set for six-dimensional phase space diagnostics of a time-varying ultrafast electron-diffraction compact accelerator. Physical Review E 107(4), 045302 (2023) Kingma and Welling [2013] Kingma, D.P., Welling, M.: Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114 (2013) Hartmann et al. [2022] Hartmann, G., Goetzke, G., Düsterer, S., Feuer-Forson, P., Lever, F., Meier, D., Möller, F., Vera Ramirez, L., Guehr, M., Tiedtke, K., et al.: Unsupervised real-world knowledge extraction via disentangled variational autoencoders for photon diagnostics. Scientific Reports 12(1), 20783 (2022) Scheinker et al. [2023] Scheinker, A., Filippetto, D., Cropp, F.: 6d phase space diagnostics based on adaptively tuned physics-informed generative convolutional neural networks. In: Journal of Physics: Conference Series, vol. 2420, p. 012068 (2023). IOP Publishing Cathey et al. [2018] Cathey, B., Cousineau, S., Aleksandrov, A., Zhukov, A.: First six dimensional phase space measurement of an accelerator beam. Physical Review Letters 121(6), 064804 (2018) Tenenbaum [2005] Tenenbaum, P.: Lucretia: A matlab-based toolbox for the modelling and simulation of single-pass electron beam transport systems. In: Proceedings of the 2005 Particle Accelerator Conference, pp. 4197–4199 (2005). IEEE Young and Billen [2003] Young, L., Billen, J.: The particle tracking code parmela. In: Proceedings of the Particle Accelerator Conference, vol. 5, pp. 3521–3523 (2003) Pang and Rybarcyk [2014] Pang, X., Rybarcyk, L.: Gpu accelerated online multi-particle beam dynamics simulator for ion linear particle accelerators. Computer Physics Communications 185(3), 744–753 (2014) Adelmann [2019] Adelmann, A.: On nonintrusive uncertainty quantification and surrogate model construction in particle accelerator modeling. SIAM/ASA Journal on Uncertainty Quantification 7(2), 383–416 (2019) Newton [1970] Newton, R.: Inverse problems in physics. SIAM Review 12(3), 346–356 (1970) Wolski et al. [2022] Wolski, A., Johnson, M.A., King, M., Militsyn, B.L., Williams, P.H.: Transverse phase space tomography in an accelerator test facility using image compression and machine learning. Physical Review Accelerators and Beams 25(12), 122803 (2022) Mayet et al. 
[2022] Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022) Zhu et al. [2] Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. [2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. 
The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. 
[2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Scheinker, A., Cropp, F., Filippetto, D.: Adaptive autoencoder latent space tuning for more robust machine learning beyond the training set for six-dimensional phase space diagnostics of a time-varying ultrafast electron-diffraction compact accelerator. Physical Review E 107(4), 045302 (2023) Kingma and Welling [2013] Kingma, D.P., Welling, M.: Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114 (2013) Hartmann et al. [2022] Hartmann, G., Goetzke, G., Düsterer, S., Feuer-Forson, P., Lever, F., Meier, D., Möller, F., Vera Ramirez, L., Guehr, M., Tiedtke, K., et al.: Unsupervised real-world knowledge extraction via disentangled variational autoencoders for photon diagnostics. Scientific Reports 12(1), 20783 (2022) Scheinker et al. [2023] Scheinker, A., Filippetto, D., Cropp, F.: 6d phase space diagnostics based on adaptively tuned physics-informed generative convolutional neural networks. In: Journal of Physics: Conference Series, vol. 2420, p. 012068 (2023). IOP Publishing Cathey et al. [2018] Cathey, B., Cousineau, S., Aleksandrov, A., Zhukov, A.: First six dimensional phase space measurement of an accelerator beam. Physical Review Letters 121(6), 064804 (2018) Tenenbaum [2005] Tenenbaum, P.: Lucretia: A matlab-based toolbox for the modelling and simulation of single-pass electron beam transport systems. In: Proceedings of the 2005 Particle Accelerator Conference, pp. 4197–4199 (2005). IEEE Young and Billen [2003] Young, L., Billen, J.: The particle tracking code parmela. In: Proceedings of the Particle Accelerator Conference, vol. 5, pp. 3521–3523 (2003) Pang and Rybarcyk [2014] Pang, X., Rybarcyk, L.: Gpu accelerated online multi-particle beam dynamics simulator for ion linear particle accelerators. Computer Physics Communications 185(3), 744–753 (2014) Adelmann [2019] Adelmann, A.: On nonintrusive uncertainty quantification and surrogate model construction in particle accelerator modeling. SIAM/ASA Journal on Uncertainty Quantification 7(2), 383–416 (2019) Newton [1970] Newton, R.: Inverse problems in physics. SIAM Review 12(3), 346–356 (1970) Wolski et al. [2022] Wolski, A., Johnson, M.A., King, M., Militsyn, B.L., Williams, P.H.: Transverse phase space tomography in an accelerator test facility using image compression and machine learning. Physical Review Accelerators and Beams 25(12), 122803 (2022) Mayet et al. [2022] Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. 
Physical Review Accelerators and Beams 25(9), 094601 (2022) Zhu et al. [2] Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. [2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. 
[2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. 
[2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Kingma, D.P., Welling, M.: Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114 (2013) Hartmann et al. [2022] Hartmann, G., Goetzke, G., Düsterer, S., Feuer-Forson, P., Lever, F., Meier, D., Möller, F., Vera Ramirez, L., Guehr, M., Tiedtke, K., et al.: Unsupervised real-world knowledge extraction via disentangled variational autoencoders for photon diagnostics. Scientific Reports 12(1), 20783 (2022) Scheinker et al. [2023] Scheinker, A., Filippetto, D., Cropp, F.: 6d phase space diagnostics based on adaptively tuned physics-informed generative convolutional neural networks. In: Journal of Physics: Conference Series, vol. 2420, p. 012068 (2023). IOP Publishing Cathey et al. [2018] Cathey, B., Cousineau, S., Aleksandrov, A., Zhukov, A.: First six dimensional phase space measurement of an accelerator beam. Physical Review Letters 121(6), 064804 (2018) Tenenbaum [2005] Tenenbaum, P.: Lucretia: A matlab-based toolbox for the modelling and simulation of single-pass electron beam transport systems. In: Proceedings of the 2005 Particle Accelerator Conference, pp. 4197–4199 (2005). IEEE Young and Billen [2003] Young, L., Billen, J.: The particle tracking code parmela. In: Proceedings of the Particle Accelerator Conference, vol. 5, pp. 3521–3523 (2003) Pang and Rybarcyk [2014] Pang, X., Rybarcyk, L.: Gpu accelerated online multi-particle beam dynamics simulator for ion linear particle accelerators. Computer Physics Communications 185(3), 744–753 (2014) Adelmann [2019] Adelmann, A.: On nonintrusive uncertainty quantification and surrogate model construction in particle accelerator modeling. SIAM/ASA Journal on Uncertainty Quantification 7(2), 383–416 (2019) Newton [1970] Newton, R.: Inverse problems in physics. SIAM Review 12(3), 346–356 (1970) Wolski et al. [2022] Wolski, A., Johnson, M.A., King, M., Militsyn, B.L., Williams, P.H.: Transverse phase space tomography in an accelerator test facility using image compression and machine learning. Physical Review Accelerators and Beams 25(12), 122803 (2022) Mayet et al. [2022] Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022) Zhu et al. [2] Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. 
Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. [2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. 
[2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. 
[2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Hartmann, G., Goetzke, G., Düsterer, S., Feuer-Forson, P., Lever, F., Meier, D., Möller, F., Vera Ramirez, L., Guehr, M., Tiedtke, K., et al.: Unsupervised real-world knowledge extraction via disentangled variational autoencoders for photon diagnostics. Scientific Reports 12(1), 20783 (2022) Scheinker et al. [2023] Scheinker, A., Filippetto, D., Cropp, F.: 6d phase space diagnostics based on adaptively tuned physics-informed generative convolutional neural networks. In: Journal of Physics: Conference Series, vol. 2420, p. 012068 (2023). IOP Publishing Cathey et al. [2018] Cathey, B., Cousineau, S., Aleksandrov, A., Zhukov, A.: First six dimensional phase space measurement of an accelerator beam. Physical Review Letters 121(6), 064804 (2018) Tenenbaum [2005] Tenenbaum, P.: Lucretia: A matlab-based toolbox for the modelling and simulation of single-pass electron beam transport systems. In: Proceedings of the 2005 Particle Accelerator Conference, pp. 4197–4199 (2005). IEEE Young and Billen [2003] Young, L., Billen, J.: The particle tracking code parmela. In: Proceedings of the Particle Accelerator Conference, vol. 5, pp. 3521–3523 (2003) Pang and Rybarcyk [2014] Pang, X., Rybarcyk, L.: Gpu accelerated online multi-particle beam dynamics simulator for ion linear particle accelerators. Computer Physics Communications 185(3), 744–753 (2014) Adelmann [2019] Adelmann, A.: On nonintrusive uncertainty quantification and surrogate model construction in particle accelerator modeling. SIAM/ASA Journal on Uncertainty Quantification 7(2), 383–416 (2019) Newton [1970] Newton, R.: Inverse problems in physics. SIAM Review 12(3), 346–356 (1970) Wolski et al. [2022] Wolski, A., Johnson, M.A., King, M., Militsyn, B.L., Williams, P.H.: Transverse phase space tomography in an accelerator test facility using image compression and machine learning. Physical Review Accelerators and Beams 25(12), 122803 (2022) Mayet et al. [2022] Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022) Zhu et al. [2] Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. [2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. 
Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. 
Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Scheinker, A., Filippetto, D., Cropp, F.: 6d phase space diagnostics based on adaptively tuned physics-informed generative convolutional neural networks. In: Journal of Physics: Conference Series, vol. 2420, p. 012068 (2023). IOP Publishing Cathey et al. [2018] Cathey, B., Cousineau, S., Aleksandrov, A., Zhukov, A.: First six dimensional phase space measurement of an accelerator beam. 
Physical Review Letters 121(6), 064804 (2018) Tenenbaum [2005] Tenenbaum, P.: Lucretia: A matlab-based toolbox for the modelling and simulation of single-pass electron beam transport systems. In: Proceedings of the 2005 Particle Accelerator Conference, pp. 4197–4199 (2005). IEEE Young and Billen [2003] Young, L., Billen, J.: The particle tracking code parmela. In: Proceedings of the Particle Accelerator Conference, vol. 5, pp. 3521–3523 (2003) Pang and Rybarcyk [2014] Pang, X., Rybarcyk, L.: Gpu accelerated online multi-particle beam dynamics simulator for ion linear particle accelerators. Computer Physics Communications 185(3), 744–753 (2014) Adelmann [2019] Adelmann, A.: On nonintrusive uncertainty quantification and surrogate model construction in particle accelerator modeling. SIAM/ASA Journal on Uncertainty Quantification 7(2), 383–416 (2019) Newton [1970] Newton, R.: Inverse problems in physics. SIAM Review 12(3), 346–356 (1970) Wolski et al. [2022] Wolski, A., Johnson, M.A., King, M., Militsyn, B.L., Williams, P.H.: Transverse phase space tomography in an accelerator test facility using image compression and machine learning. Physical Review Accelerators and Beams 25(12), 122803 (2022) Mayet et al. [2022] Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022) Zhu et al. [2] Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. [2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. 
Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. 
IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Cathey, B., Cousineau, S., Aleksandrov, A., Zhukov, A.: First six dimensional phase space measurement of an accelerator beam. Physical Review Letters 121(6), 064804 (2018) Tenenbaum [2005] Tenenbaum, P.: Lucretia: A matlab-based toolbox for the modelling and simulation of single-pass electron beam transport systems. In: Proceedings of the 2005 Particle Accelerator Conference, pp. 4197–4199 (2005). IEEE Young and Billen [2003] Young, L., Billen, J.: The particle tracking code parmela. In: Proceedings of the Particle Accelerator Conference, vol. 5, pp. 3521–3523 (2003) Pang and Rybarcyk [2014] Pang, X., Rybarcyk, L.: Gpu accelerated online multi-particle beam dynamics simulator for ion linear particle accelerators. Computer Physics Communications 185(3), 744–753 (2014) Adelmann [2019] Adelmann, A.: On nonintrusive uncertainty quantification and surrogate model construction in particle accelerator modeling. SIAM/ASA Journal on Uncertainty Quantification 7(2), 383–416 (2019) Newton [1970] Newton, R.: Inverse problems in physics. SIAM Review 12(3), 346–356 (1970) Wolski et al. [2022] Wolski, A., Johnson, M.A., King, M., Militsyn, B.L., Williams, P.H.: Transverse phase space tomography in an accelerator test facility using image compression and machine learning. Physical Review Accelerators and Beams 25(12), 122803 (2022) Mayet et al. 
[2022] Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022) Zhu et al. [2] Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. [2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. 
[2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. 
arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. 
[2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. 
arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. 
Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. 
IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. 
In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. 
[2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. 
[2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. 
[2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. 
[2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. 
[2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. 
arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. 
arXiv preprint arXiv:1807.07543 (2018) Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). 
IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018)
  12. Shi, X., Chen, Z., Wang, H., Yeung, D.-Y., Wong, W.-K., Woo, W.-c.: Convolutional lstm network: A machine learning approach for precipitation nowcasting. Advances in neural information processing systems 28 (2015) Cheng et al. [2020] Cheng, M., Fang, F., Pain, C.C., Navon, I.: Data-driven modelling of nonlinear spatio-temporal fluid flows using a deep convolutional generative adversarial network. Computer Methods in Applied Mechanics and Engineering 365, 113000 (2020) Kipf and Welling [2016] Kipf, T.N., Welling, M.: Semi-supervised classification with graph convolutional networks. arXiv preprint arXiv:1609.02907 (2016) Chen et al. [2021] Chen, J., Hachem, E., Viquerat, J.: Graph neural networks for laminar flow prediction around random two-dimensional shapes. Physics of Fluids 33(12) (2021) Scheinker [2021] Scheinker, A.: Adaptive machine learning for time-varying systems: low dimensional latent space tuning. Journal of Instrumentation 16(10), 10008 (2021) Montes de Oca Zapiain et al. [2021] Oca Zapiain, D., Stewart, J.A., Dingreville, R.: Accelerating phase-field-based microstructure evolution predictions via surrogate models trained by machine learning methods. npj Computational Materials 7(1), 3 (2021) Wiewel et al. [2019] Wiewel, S., Becher, M., Thuerey, N.: Latent space physics: Towards learning the temporal evolution of fluid flow. In: Computer Graphics Forum, vol. 38, pp. 71–82 (2019). Wiley Online Library Nakamura et al. [2021] Nakamura, T., Fukami, K., Hasegawa, K., Nabae, Y., Fukagata, K.: Convolutional neural network and long short-term memory based reduced order surrogate for minimal turbulent channel flow. Physics of Fluids 33(2) (2021) Maulik et al. [2021] Maulik, R., Lusch, B., Balaprakash, P.: Reduced-order modeling of advection-dominated systems with recurrent neural networks and convolutional autoencoders. Physics of Fluids 33(3) (2021) Vlachas et al. [2022] Vlachas, P.R., Arampatzis, G., Uhler, C., Koumoutsakos, P.: Multiscale simulations of complex systems by learning their effective dynamics. Nature Machine Intelligence 4(4), 359–366 (2022) Rautela et al. [2022] Rautela, M., Senthilnath, J., Monaco, E., Gopalakrishnan, S.: Delamination prediction in composite panels using unsupervised-feature learning methods with wavelet-enhanced guided wave representations. Composite Structures 291, 115579 (2022) Scheinker et al. [2023] Scheinker, A., Cropp, F., Filippetto, D.: Adaptive autoencoder latent space tuning for more robust machine learning beyond the training set for six-dimensional phase space diagnostics of a time-varying ultrafast electron-diffraction compact accelerator. Physical Review E 107(4), 045302 (2023) Kingma and Welling [2013] Kingma, D.P., Welling, M.: Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114 (2013) Hartmann et al. [2022] Hartmann, G., Goetzke, G., Düsterer, S., Feuer-Forson, P., Lever, F., Meier, D., Möller, F., Vera Ramirez, L., Guehr, M., Tiedtke, K., et al.: Unsupervised real-world knowledge extraction via disentangled variational autoencoders for photon diagnostics. Scientific Reports 12(1), 20783 (2022) Scheinker et al. [2023] Scheinker, A., Filippetto, D., Cropp, F.: 6d phase space diagnostics based on adaptively tuned physics-informed generative convolutional neural networks. In: Journal of Physics: Conference Series, vol. 2420, p. 012068 (2023). IOP Publishing Cathey et al. [2018] Cathey, B., Cousineau, S., Aleksandrov, A., Zhukov, A.: First six dimensional phase space measurement of an accelerator beam. 
Physical Review Letters 121(6), 064804 (2018) Tenenbaum [2005] Tenenbaum, P.: Lucretia: A matlab-based toolbox for the modelling and simulation of single-pass electron beam transport systems. In: Proceedings of the 2005 Particle Accelerator Conference, pp. 4197–4199 (2005). IEEE Young and Billen [2003] Young, L., Billen, J.: The particle tracking code parmela. In: Proceedings of the Particle Accelerator Conference, vol. 5, pp. 3521–3523 (2003) Pang and Rybarcyk [2014] Pang, X., Rybarcyk, L.: Gpu accelerated online multi-particle beam dynamics simulator for ion linear particle accelerators. Computer Physics Communications 185(3), 744–753 (2014) Adelmann [2019] Adelmann, A.: On nonintrusive uncertainty quantification and surrogate model construction in particle accelerator modeling. SIAM/ASA Journal on Uncertainty Quantification 7(2), 383–416 (2019) Newton [1970] Newton, R.: Inverse problems in physics. SIAM Review 12(3), 346–356 (1970) Wolski et al. [2022] Wolski, A., Johnson, M.A., King, M., Militsyn, B.L., Williams, P.H.: Transverse phase space tomography in an accelerator test facility using image compression and machine learning. Physical Review Accelerators and Beams 25(12), 122803 (2022) Mayet et al. [2022] Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022) Zhu et al. [2] Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. [2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. 
Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. 
IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Cheng, M., Fang, F., Pain, C.C., Navon, I.: Data-driven modelling of nonlinear spatio-temporal fluid flows using a deep convolutional generative adversarial network. Computer Methods in Applied Mechanics and Engineering 365, 113000 (2020) Kipf and Welling [2016] Kipf, T.N., Welling, M.: Semi-supervised classification with graph convolutional networks. arXiv preprint arXiv:1609.02907 (2016) Chen et al. [2021] Chen, J., Hachem, E., Viquerat, J.: Graph neural networks for laminar flow prediction around random two-dimensional shapes. Physics of Fluids 33(12) (2021) Scheinker [2021] Scheinker, A.: Adaptive machine learning for time-varying systems: low dimensional latent space tuning. Journal of Instrumentation 16(10), 10008 (2021) Montes de Oca Zapiain et al. [2021] Oca Zapiain, D., Stewart, J.A., Dingreville, R.: Accelerating phase-field-based microstructure evolution predictions via surrogate models trained by machine learning methods. npj Computational Materials 7(1), 3 (2021) Wiewel et al. [2019] Wiewel, S., Becher, M., Thuerey, N.: Latent space physics: Towards learning the temporal evolution of fluid flow. In: Computer Graphics Forum, vol. 38, pp. 71–82 (2019). Wiley Online Library Nakamura et al. [2021] Nakamura, T., Fukami, K., Hasegawa, K., Nabae, Y., Fukagata, K.: Convolutional neural network and long short-term memory based reduced order surrogate for minimal turbulent channel flow. 
Physics of Fluids 33(2) (2021) Maulik et al. [2021] Maulik, R., Lusch, B., Balaprakash, P.: Reduced-order modeling of advection-dominated systems with recurrent neural networks and convolutional autoencoders. Physics of Fluids 33(3) (2021) Vlachas et al. [2022] Vlachas, P.R., Arampatzis, G., Uhler, C., Koumoutsakos, P.: Multiscale simulations of complex systems by learning their effective dynamics. Nature Machine Intelligence 4(4), 359–366 (2022) Rautela et al. [2022] Rautela, M., Senthilnath, J., Monaco, E., Gopalakrishnan, S.: Delamination prediction in composite panels using unsupervised-feature learning methods with wavelet-enhanced guided wave representations. Composite Structures 291, 115579 (2022) Scheinker et al. [2023] Scheinker, A., Cropp, F., Filippetto, D.: Adaptive autoencoder latent space tuning for more robust machine learning beyond the training set for six-dimensional phase space diagnostics of a time-varying ultrafast electron-diffraction compact accelerator. Physical Review E 107(4), 045302 (2023) Kingma and Welling [2013] Kingma, D.P., Welling, M.: Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114 (2013) Hartmann et al. [2022] Hartmann, G., Goetzke, G., Düsterer, S., Feuer-Forson, P., Lever, F., Meier, D., Möller, F., Vera Ramirez, L., Guehr, M., Tiedtke, K., et al.: Unsupervised real-world knowledge extraction via disentangled variational autoencoders for photon diagnostics. Scientific Reports 12(1), 20783 (2022) Scheinker et al. [2023] Scheinker, A., Filippetto, D., Cropp, F.: 6d phase space diagnostics based on adaptively tuned physics-informed generative convolutional neural networks. In: Journal of Physics: Conference Series, vol. 2420, p. 012068 (2023). IOP Publishing Cathey et al. [2018] Cathey, B., Cousineau, S., Aleksandrov, A., Zhukov, A.: First six dimensional phase space measurement of an accelerator beam. Physical Review Letters 121(6), 064804 (2018) Tenenbaum [2005] Tenenbaum, P.: Lucretia: A matlab-based toolbox for the modelling and simulation of single-pass electron beam transport systems. In: Proceedings of the 2005 Particle Accelerator Conference, pp. 4197–4199 (2005). IEEE Young and Billen [2003] Young, L., Billen, J.: The particle tracking code parmela. In: Proceedings of the Particle Accelerator Conference, vol. 5, pp. 3521–3523 (2003) Pang and Rybarcyk [2014] Pang, X., Rybarcyk, L.: Gpu accelerated online multi-particle beam dynamics simulator for ion linear particle accelerators. Computer Physics Communications 185(3), 744–753 (2014) Adelmann [2019] Adelmann, A.: On nonintrusive uncertainty quantification and surrogate model construction in particle accelerator modeling. SIAM/ASA Journal on Uncertainty Quantification 7(2), 383–416 (2019) Newton [1970] Newton, R.: Inverse problems in physics. SIAM Review 12(3), 346–356 (1970) Wolski et al. [2022] Wolski, A., Johnson, M.A., King, M., Militsyn, B.L., Williams, P.H.: Transverse phase space tomography in an accelerator test facility using image compression and machine learning. Physical Review Accelerators and Beams 25(12), 122803 (2022) Mayet et al. [2022] Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022) Zhu et al. 
[2] Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. [2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. 
[2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. 
[2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Kipf, T.N., Welling, M.: Semi-supervised classification with graph convolutional networks. arXiv preprint arXiv:1609.02907 (2016) Chen et al. [2021] Chen, J., Hachem, E., Viquerat, J.: Graph neural networks for laminar flow prediction around random two-dimensional shapes. Physics of Fluids 33(12) (2021) Scheinker [2021] Scheinker, A.: Adaptive machine learning for time-varying systems: low dimensional latent space tuning. Journal of Instrumentation 16(10), 10008 (2021) Montes de Oca Zapiain et al. [2021] Oca Zapiain, D., Stewart, J.A., Dingreville, R.: Accelerating phase-field-based microstructure evolution predictions via surrogate models trained by machine learning methods. npj Computational Materials 7(1), 3 (2021) Wiewel et al. [2019] Wiewel, S., Becher, M., Thuerey, N.: Latent space physics: Towards learning the temporal evolution of fluid flow. In: Computer Graphics Forum, vol. 38, pp. 71–82 (2019). Wiley Online Library Nakamura et al. [2021] Nakamura, T., Fukami, K., Hasegawa, K., Nabae, Y., Fukagata, K.: Convolutional neural network and long short-term memory based reduced order surrogate for minimal turbulent channel flow. Physics of Fluids 33(2) (2021) Maulik et al. [2021] Maulik, R., Lusch, B., Balaprakash, P.: Reduced-order modeling of advection-dominated systems with recurrent neural networks and convolutional autoencoders. Physics of Fluids 33(3) (2021) Vlachas et al. [2022] Vlachas, P.R., Arampatzis, G., Uhler, C., Koumoutsakos, P.: Multiscale simulations of complex systems by learning their effective dynamics. Nature Machine Intelligence 4(4), 359–366 (2022) Rautela et al. [2022] Rautela, M., Senthilnath, J., Monaco, E., Gopalakrishnan, S.: Delamination prediction in composite panels using unsupervised-feature learning methods with wavelet-enhanced guided wave representations. Composite Structures 291, 115579 (2022) Scheinker et al. [2023] Scheinker, A., Cropp, F., Filippetto, D.: Adaptive autoencoder latent space tuning for more robust machine learning beyond the training set for six-dimensional phase space diagnostics of a time-varying ultrafast electron-diffraction compact accelerator. Physical Review E 107(4), 045302 (2023) Kingma and Welling [2013] Kingma, D.P., Welling, M.: Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114 (2013) Hartmann et al. [2022] Hartmann, G., Goetzke, G., Düsterer, S., Feuer-Forson, P., Lever, F., Meier, D., Möller, F., Vera Ramirez, L., Guehr, M., Tiedtke, K., et al.: Unsupervised real-world knowledge extraction via disentangled variational autoencoders for photon diagnostics. Scientific Reports 12(1), 20783 (2022) Scheinker et al. [2023] Scheinker, A., Filippetto, D., Cropp, F.: 6d phase space diagnostics based on adaptively tuned physics-informed generative convolutional neural networks. In: Journal of Physics: Conference Series, vol. 2420, p. 
012068 (2023). IOP Publishing Cathey et al. [2018] Cathey, B., Cousineau, S., Aleksandrov, A., Zhukov, A.: First six dimensional phase space measurement of an accelerator beam. Physical Review Letters 121(6), 064804 (2018) Tenenbaum [2005] Tenenbaum, P.: Lucretia: A matlab-based toolbox for the modelling and simulation of single-pass electron beam transport systems. In: Proceedings of the 2005 Particle Accelerator Conference, pp. 4197–4199 (2005). IEEE Young and Billen [2003] Young, L., Billen, J.: The particle tracking code parmela. In: Proceedings of the Particle Accelerator Conference, vol. 5, pp. 3521–3523 (2003) Pang and Rybarcyk [2014] Pang, X., Rybarcyk, L.: Gpu accelerated online multi-particle beam dynamics simulator for ion linear particle accelerators. Computer Physics Communications 185(3), 744–753 (2014) Adelmann [2019] Adelmann, A.: On nonintrusive uncertainty quantification and surrogate model construction in particle accelerator modeling. SIAM/ASA Journal on Uncertainty Quantification 7(2), 383–416 (2019) Newton [1970] Newton, R.: Inverse problems in physics. SIAM Review 12(3), 346–356 (1970) Wolski et al. [2022] Wolski, A., Johnson, M.A., King, M., Militsyn, B.L., Williams, P.H.: Transverse phase space tomography in an accelerator test facility using image compression and machine learning. Physical Review Accelerators and Beams 25(12), 122803 (2022) Mayet et al. [2022] Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022) Zhu et al. [2] Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. [2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. 
[2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. 
[2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Chen, J., Hachem, E., Viquerat, J.: Graph neural networks for laminar flow prediction around random two-dimensional shapes. Physics of Fluids 33(12) (2021) Scheinker [2021] Scheinker, A.: Adaptive machine learning for time-varying systems: low dimensional latent space tuning. Journal of Instrumentation 16(10), 10008 (2021) Montes de Oca Zapiain et al. [2021] Oca Zapiain, D., Stewart, J.A., Dingreville, R.: Accelerating phase-field-based microstructure evolution predictions via surrogate models trained by machine learning methods. npj Computational Materials 7(1), 3 (2021) Wiewel et al. [2019] Wiewel, S., Becher, M., Thuerey, N.: Latent space physics: Towards learning the temporal evolution of fluid flow. In: Computer Graphics Forum, vol. 38, pp. 71–82 (2019). Wiley Online Library Nakamura et al. [2021] Nakamura, T., Fukami, K., Hasegawa, K., Nabae, Y., Fukagata, K.: Convolutional neural network and long short-term memory based reduced order surrogate for minimal turbulent channel flow. Physics of Fluids 33(2) (2021) Maulik et al. [2021] Maulik, R., Lusch, B., Balaprakash, P.: Reduced-order modeling of advection-dominated systems with recurrent neural networks and convolutional autoencoders. Physics of Fluids 33(3) (2021) Vlachas et al. 
[2022] Vlachas, P.R., Arampatzis, G., Uhler, C., Koumoutsakos, P.: Multiscale simulations of complex systems by learning their effective dynamics. Nature Machine Intelligence 4(4), 359–366 (2022)
Rautela et al. [2022] Rautela, M., Senthilnath, J., Monaco, E., Gopalakrishnan, S.: Delamination prediction in composite panels using unsupervised-feature learning methods with wavelet-enhanced guided wave representations. Composite Structures 291, 115579 (2022)
Scheinker et al. [2023] Scheinker, A., Cropp, F., Filippetto, D.: Adaptive autoencoder latent space tuning for more robust machine learning beyond the training set for six-dimensional phase space diagnostics of a time-varying ultrafast electron-diffraction compact accelerator. Physical Review E 107(4), 045302 (2023)
Kingma and Welling [2013] Kingma, D.P., Welling, M.: Auto-encoding variational Bayes. arXiv preprint arXiv:1312.6114 (2013)
Hartmann et al. [2022] Hartmann, G., Goetzke, G., Düsterer, S., Feuer-Forson, P., Lever, F., Meier, D., Möller, F., Vera Ramirez, L., Guehr, M., Tiedtke, K., et al.: Unsupervised real-world knowledge extraction via disentangled variational autoencoders for photon diagnostics. Scientific Reports 12(1), 20783 (2022)
Scheinker et al. [2023] Scheinker, A., Filippetto, D., Cropp, F.: 6D phase space diagnostics based on adaptively tuned physics-informed generative convolutional neural networks. In: Journal of Physics: Conference Series, vol. 2420, p. 012068 (2023). IOP Publishing
Cathey et al. [2018] Cathey, B., Cousineau, S., Aleksandrov, A., Zhukov, A.: First six dimensional phase space measurement of an accelerator beam. Physical Review Letters 121(6), 064804 (2018)
Tenenbaum [2005] Tenenbaum, P.: Lucretia: A MATLAB-based toolbox for the modelling and simulation of single-pass electron beam transport systems. In: Proceedings of the 2005 Particle Accelerator Conference, pp. 4197–4199 (2005). IEEE
Young and Billen [2003] Young, L., Billen, J.: The particle tracking code PARMELA. In: Proceedings of the Particle Accelerator Conference, vol. 5, pp. 3521–3523 (2003)
Pang and Rybarcyk [2014] Pang, X., Rybarcyk, L.: GPU accelerated online multi-particle beam dynamics simulator for ion linear particle accelerators. Computer Physics Communications 185(3), 744–753 (2014)
Adelmann [2019] Adelmann, A.: On nonintrusive uncertainty quantification and surrogate model construction in particle accelerator modeling. SIAM/ASA Journal on Uncertainty Quantification 7(2), 383–416 (2019)
Newton [1970] Newton, R.: Inverse problems in physics. SIAM Review 12(3), 346–356 (1970)
Wolski et al. [2022] Wolski, A., Johnson, M.A., King, M., Militsyn, B.L., Williams, P.H.: Transverse phase space tomography in an accelerator test facility using image compression and machine learning. Physical Review Accelerators and Beams 25(12), 122803 (2022)
Mayet et al. [2022] Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022)
Zhu et al. [2021] Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2021)
Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018)
Caliari et al. [2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep Lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023)
Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense X-ray free-electron-laser pulses using Bayesian optimization. Physical Review Research 5(2), 023114 (2023)
Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical Review Accelerators and Beams 23(7), 074601 (2020)
Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022)
Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient RF cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022)
Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at Jefferson Laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020)
Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018)
Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023)
Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016)
Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (ARIMA) models. Academic Emergency Medicine 5(7), 739–744 (1998)
Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: MADE: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR
Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016)
Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature Computational Science 2(11), 745–757 (2022)
Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023)
Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE Transactions on Neural Networks and Learning Systems 32(1), 4–24 (2020)
Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons (2008)
Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021)
Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in Neural Information Processing Systems 28 (2015)
Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021)
Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in VAE latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021)
Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of Cheminformatics 10(1), 1–9 (2018)
Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive LSTM for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020)
Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014)
Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE Transactions on Image Processing 13(4), 600–612 (2004)
Van der Maaten and Hinton [2008] Van der Maaten, L., Hinton, G.: Visualizing data using t-SNE. Journal of Machine Learning Research 9(11) (2008)
McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: UMAP: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018)
Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022)
Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023)
Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: GANs trained by a two time-scale update rule converge to a local Nash equilibrium. Advances in Neural Information Processing Systems 30 (2017)
Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in Neural Information Processing Systems 31 (2018)
Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023)
Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of ResNet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE
He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016)
Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018)
[2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. 
arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Rautela, M., Senthilnath, J., Monaco, E., Gopalakrishnan, S.: Delamination prediction in composite panels using unsupervised-feature learning methods with wavelet-enhanced guided wave representations. Composite Structures 291, 115579 (2022) Scheinker et al. [2023] Scheinker, A., Cropp, F., Filippetto, D.: Adaptive autoencoder latent space tuning for more robust machine learning beyond the training set for six-dimensional phase space diagnostics of a time-varying ultrafast electron-diffraction compact accelerator. Physical Review E 107(4), 045302 (2023) Kingma and Welling [2013] Kingma, D.P., Welling, M.: Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114 (2013) Hartmann et al. [2022] Hartmann, G., Goetzke, G., Düsterer, S., Feuer-Forson, P., Lever, F., Meier, D., Möller, F., Vera Ramirez, L., Guehr, M., Tiedtke, K., et al.: Unsupervised real-world knowledge extraction via disentangled variational autoencoders for photon diagnostics. Scientific Reports 12(1), 20783 (2022) Scheinker et al. [2023] Scheinker, A., Filippetto, D., Cropp, F.: 6d phase space diagnostics based on adaptively tuned physics-informed generative convolutional neural networks. In: Journal of Physics: Conference Series, vol. 2420, p. 012068 (2023). IOP Publishing Cathey et al. [2018] Cathey, B., Cousineau, S., Aleksandrov, A., Zhukov, A.: First six dimensional phase space measurement of an accelerator beam. Physical Review Letters 121(6), 064804 (2018) Tenenbaum [2005] Tenenbaum, P.: Lucretia: A matlab-based toolbox for the modelling and simulation of single-pass electron beam transport systems. In: Proceedings of the 2005 Particle Accelerator Conference, pp. 4197–4199 (2005). IEEE Young and Billen [2003] Young, L., Billen, J.: The particle tracking code parmela. 
In: Proceedings of the Particle Accelerator Conference, vol. 5, pp. 3521–3523 (2003) Pang and Rybarcyk [2014] Pang, X., Rybarcyk, L.: Gpu accelerated online multi-particle beam dynamics simulator for ion linear particle accelerators. Computer Physics Communications 185(3), 744–753 (2014) Adelmann [2019] Adelmann, A.: On nonintrusive uncertainty quantification and surrogate model construction in particle accelerator modeling. SIAM/ASA Journal on Uncertainty Quantification 7(2), 383–416 (2019) Newton [1970] Newton, R.: Inverse problems in physics. SIAM Review 12(3), 346–356 (1970) Wolski et al. [2022] Wolski, A., Johnson, M.A., King, M., Militsyn, B.L., Williams, P.H.: Transverse phase space tomography in an accelerator test facility using image compression and machine learning. Physical Review Accelerators and Beams 25(12), 122803 (2022) Mayet et al. [2022] Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022) Zhu et al. [2] Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. [2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. 
[2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. 
arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Scheinker, A., Cropp, F., Filippetto, D.: Adaptive autoencoder latent space tuning for more robust machine learning beyond the training set for six-dimensional phase space diagnostics of a time-varying ultrafast electron-diffraction compact accelerator. Physical Review E 107(4), 045302 (2023) Kingma and Welling [2013] Kingma, D.P., Welling, M.: Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114 (2013) Hartmann et al. [2022] Hartmann, G., Goetzke, G., Düsterer, S., Feuer-Forson, P., Lever, F., Meier, D., Möller, F., Vera Ramirez, L., Guehr, M., Tiedtke, K., et al.: Unsupervised real-world knowledge extraction via disentangled variational autoencoders for photon diagnostics. Scientific Reports 12(1), 20783 (2022) Scheinker et al. [2023] Scheinker, A., Filippetto, D., Cropp, F.: 6d phase space diagnostics based on adaptively tuned physics-informed generative convolutional neural networks. In: Journal of Physics: Conference Series, vol. 2420, p. 012068 (2023). IOP Publishing Cathey et al. [2018] Cathey, B., Cousineau, S., Aleksandrov, A., Zhukov, A.: First six dimensional phase space measurement of an accelerator beam. Physical Review Letters 121(6), 064804 (2018) Tenenbaum [2005] Tenenbaum, P.: Lucretia: A matlab-based toolbox for the modelling and simulation of single-pass electron beam transport systems. In: Proceedings of the 2005 Particle Accelerator Conference, pp. 4197–4199 (2005). IEEE Young and Billen [2003] Young, L., Billen, J.: The particle tracking code parmela. In: Proceedings of the Particle Accelerator Conference, vol. 5, pp. 3521–3523 (2003) Pang and Rybarcyk [2014] Pang, X., Rybarcyk, L.: Gpu accelerated online multi-particle beam dynamics simulator for ion linear particle accelerators. 
Computer Physics Communications 185(3), 744–753 (2014) Adelmann [2019] Adelmann, A.: On nonintrusive uncertainty quantification and surrogate model construction in particle accelerator modeling. SIAM/ASA Journal on Uncertainty Quantification 7(2), 383–416 (2019) Newton [1970] Newton, R.: Inverse problems in physics. SIAM Review 12(3), 346–356 (1970) Wolski et al. [2022] Wolski, A., Johnson, M.A., King, M., Militsyn, B.L., Williams, P.H.: Transverse phase space tomography in an accelerator test facility using image compression and machine learning. Physical Review Accelerators and Beams 25(12), 122803 (2022) Mayet et al. [2022] Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022) Zhu et al. [2] Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. [2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. 
Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. 
IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Kingma, D.P., Welling, M.: Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114 (2013) Hartmann et al. [2022] Hartmann, G., Goetzke, G., Düsterer, S., Feuer-Forson, P., Lever, F., Meier, D., Möller, F., Vera Ramirez, L., Guehr, M., Tiedtke, K., et al.: Unsupervised real-world knowledge extraction via disentangled variational autoencoders for photon diagnostics. Scientific Reports 12(1), 20783 (2022) Scheinker et al. [2023] Scheinker, A., Filippetto, D., Cropp, F.: 6d phase space diagnostics based on adaptively tuned physics-informed generative convolutional neural networks. In: Journal of Physics: Conference Series, vol. 2420, p. 012068 (2023). IOP Publishing Cathey et al. [2018] Cathey, B., Cousineau, S., Aleksandrov, A., Zhukov, A.: First six dimensional phase space measurement of an accelerator beam. Physical Review Letters 121(6), 064804 (2018) Tenenbaum [2005] Tenenbaum, P.: Lucretia: A matlab-based toolbox for the modelling and simulation of single-pass electron beam transport systems. In: Proceedings of the 2005 Particle Accelerator Conference, pp. 4197–4199 (2005). IEEE Young and Billen [2003] Young, L., Billen, J.: The particle tracking code parmela. In: Proceedings of the Particle Accelerator Conference, vol. 5, pp. 3521–3523 (2003) Pang and Rybarcyk [2014] Pang, X., Rybarcyk, L.: Gpu accelerated online multi-particle beam dynamics simulator for ion linear particle accelerators. Computer Physics Communications 185(3), 744–753 (2014) Adelmann [2019] Adelmann, A.: On nonintrusive uncertainty quantification and surrogate model construction in particle accelerator modeling. SIAM/ASA Journal on Uncertainty Quantification 7(2), 383–416 (2019) Newton [1970] Newton, R.: Inverse problems in physics. SIAM Review 12(3), 346–356 (1970) Wolski et al. 
[2022] Wolski, A., Johnson, M.A., King, M., Militsyn, B.L., Williams, P.H.: Transverse phase space tomography in an accelerator test facility using image compression and machine learning. Physical Review Accelerators and Beams 25(12), 122803 (2022) Mayet et al. [2022] Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022) Zhu et al. [2] Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. [2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. 
In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. 
[2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Hartmann, G., Goetzke, G., Düsterer, S., Feuer-Forson, P., Lever, F., Meier, D., Möller, F., Vera Ramirez, L., Guehr, M., Tiedtke, K., et al.: Unsupervised real-world knowledge extraction via disentangled variational autoencoders for photon diagnostics. Scientific Reports 12(1), 20783 (2022) Scheinker et al. [2023] Scheinker, A., Filippetto, D., Cropp, F.: 6d phase space diagnostics based on adaptively tuned physics-informed generative convolutional neural networks. In: Journal of Physics: Conference Series, vol. 2420, p. 012068 (2023). IOP Publishing Cathey et al. [2018] Cathey, B., Cousineau, S., Aleksandrov, A., Zhukov, A.: First six dimensional phase space measurement of an accelerator beam. Physical Review Letters 121(6), 064804 (2018) Tenenbaum [2005] Tenenbaum, P.: Lucretia: A matlab-based toolbox for the modelling and simulation of single-pass electron beam transport systems. In: Proceedings of the 2005 Particle Accelerator Conference, pp. 4197–4199 (2005). IEEE Young and Billen [2003] Young, L., Billen, J.: The particle tracking code parmela. In: Proceedings of the Particle Accelerator Conference, vol. 5, pp. 3521–3523 (2003) Pang and Rybarcyk [2014] Pang, X., Rybarcyk, L.: Gpu accelerated online multi-particle beam dynamics simulator for ion linear particle accelerators. Computer Physics Communications 185(3), 744–753 (2014) Adelmann [2019] Adelmann, A.: On nonintrusive uncertainty quantification and surrogate model construction in particle accelerator modeling. SIAM/ASA Journal on Uncertainty Quantification 7(2), 383–416 (2019) Newton [1970] Newton, R.: Inverse problems in physics. SIAM Review 12(3), 346–356 (1970) Wolski et al. [2022] Wolski, A., Johnson, M.A., King, M., Militsyn, B.L., Williams, P.H.: Transverse phase space tomography in an accelerator test facility using image compression and machine learning. Physical Review Accelerators and Beams 25(12), 122803 (2022) Mayet et al. [2022] Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022) Zhu et al. [2] Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. 
Physical Review Applied 16(2), 024005 (2) Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. [2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. 
IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. 
Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. 
IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. [2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. 
[2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. 
[2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. [2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. 
[2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. 
Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. 
[2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. 
[2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. 
Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. 
Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. 
[2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. 
[2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. 
Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. 
arXiv preprint arXiv:1807.07543 (2018) Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. 
Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. 
[2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. 
[2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. 
[2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. 
arXiv preprint arXiv:1807.07543 (2018) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. 
In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018)
  13. Cheng, M., Fang, F., Pain, C.C., Navon, I.: Data-driven modelling of nonlinear spatio-temporal fluid flows using a deep convolutional generative adversarial network. Computer Methods in Applied Mechanics and Engineering 365, 113000 (2020) Kipf and Welling [2016] Kipf, T.N., Welling, M.: Semi-supervised classification with graph convolutional networks. arXiv preprint arXiv:1609.02907 (2016) Chen et al. [2021] Chen, J., Hachem, E., Viquerat, J.: Graph neural networks for laminar flow prediction around random two-dimensional shapes. Physics of Fluids 33(12) (2021) Scheinker [2021] Scheinker, A.: Adaptive machine learning for time-varying systems: low dimensional latent space tuning. Journal of Instrumentation 16(10), 10008 (2021) Montes de Oca Zapiain et al. [2021] Oca Zapiain, D., Stewart, J.A., Dingreville, R.: Accelerating phase-field-based microstructure evolution predictions via surrogate models trained by machine learning methods. npj Computational Materials 7(1), 3 (2021) Wiewel et al. [2019] Wiewel, S., Becher, M., Thuerey, N.: Latent space physics: Towards learning the temporal evolution of fluid flow. In: Computer Graphics Forum, vol. 38, pp. 71–82 (2019). Wiley Online Library Nakamura et al. [2021] Nakamura, T., Fukami, K., Hasegawa, K., Nabae, Y., Fukagata, K.: Convolutional neural network and long short-term memory based reduced order surrogate for minimal turbulent channel flow. Physics of Fluids 33(2) (2021) Maulik et al. [2021] Maulik, R., Lusch, B., Balaprakash, P.: Reduced-order modeling of advection-dominated systems with recurrent neural networks and convolutional autoencoders. Physics of Fluids 33(3) (2021) Vlachas et al. [2022] Vlachas, P.R., Arampatzis, G., Uhler, C., Koumoutsakos, P.: Multiscale simulations of complex systems by learning their effective dynamics. Nature Machine Intelligence 4(4), 359–366 (2022) Rautela et al. [2022] Rautela, M., Senthilnath, J., Monaco, E., Gopalakrishnan, S.: Delamination prediction in composite panels using unsupervised-feature learning methods with wavelet-enhanced guided wave representations. Composite Structures 291, 115579 (2022) Scheinker et al. [2023] Scheinker, A., Cropp, F., Filippetto, D.: Adaptive autoencoder latent space tuning for more robust machine learning beyond the training set for six-dimensional phase space diagnostics of a time-varying ultrafast electron-diffraction compact accelerator. Physical Review E 107(4), 045302 (2023) Kingma and Welling [2013] Kingma, D.P., Welling, M.: Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114 (2013) Hartmann et al. [2022] Hartmann, G., Goetzke, G., Düsterer, S., Feuer-Forson, P., Lever, F., Meier, D., Möller, F., Vera Ramirez, L., Guehr, M., Tiedtke, K., et al.: Unsupervised real-world knowledge extraction via disentangled variational autoencoders for photon diagnostics. Scientific Reports 12(1), 20783 (2022) Scheinker et al. [2023] Scheinker, A., Filippetto, D., Cropp, F.: 6d phase space diagnostics based on adaptively tuned physics-informed generative convolutional neural networks. In: Journal of Physics: Conference Series, vol. 2420, p. 012068 (2023). IOP Publishing Cathey et al. [2018] Cathey, B., Cousineau, S., Aleksandrov, A., Zhukov, A.: First six dimensional phase space measurement of an accelerator beam. Physical Review Letters 121(6), 064804 (2018) Tenenbaum [2005] Tenenbaum, P.: Lucretia: A matlab-based toolbox for the modelling and simulation of single-pass electron beam transport systems. 
In: Proceedings of the 2005 Particle Accelerator Conference, pp. 4197–4199 (2005). IEEE Young and Billen [2003] Young, L., Billen, J.: The particle tracking code parmela. In: Proceedings of the Particle Accelerator Conference, vol. 5, pp. 3521–3523 (2003) Pang and Rybarcyk [2014] Pang, X., Rybarcyk, L.: Gpu accelerated online multi-particle beam dynamics simulator for ion linear particle accelerators. Computer Physics Communications 185(3), 744–753 (2014) Adelmann [2019] Adelmann, A.: On nonintrusive uncertainty quantification and surrogate model construction in particle accelerator modeling. SIAM/ASA Journal on Uncertainty Quantification 7(2), 383–416 (2019) Newton [1970] Newton, R.: Inverse problems in physics. SIAM Review 12(3), 346–356 (1970) Wolski et al. [2022] Wolski, A., Johnson, M.A., King, M., Militsyn, B.L., Williams, P.H.: Transverse phase space tomography in an accelerator test facility using image compression and machine learning. Physical Review Accelerators and Beams 25(12), 122803 (2022) Mayet et al. [2022] Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022) Zhu et al. [2] Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. [2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. 
Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. 
[2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Kipf, T.N., Welling, M.: Semi-supervised classification with graph convolutional networks. arXiv preprint arXiv:1609.02907 (2016) Chen et al. [2021] Chen, J., Hachem, E., Viquerat, J.: Graph neural networks for laminar flow prediction around random two-dimensional shapes. Physics of Fluids 33(12) (2021) Scheinker [2021] Scheinker, A.: Adaptive machine learning for time-varying systems: low dimensional latent space tuning. Journal of Instrumentation 16(10), 10008 (2021) Montes de Oca Zapiain et al. [2021] Oca Zapiain, D., Stewart, J.A., Dingreville, R.: Accelerating phase-field-based microstructure evolution predictions via surrogate models trained by machine learning methods. npj Computational Materials 7(1), 3 (2021) Wiewel et al. [2019] Wiewel, S., Becher, M., Thuerey, N.: Latent space physics: Towards learning the temporal evolution of fluid flow. In: Computer Graphics Forum, vol. 38, pp. 71–82 (2019). Wiley Online Library Nakamura et al. [2021] Nakamura, T., Fukami, K., Hasegawa, K., Nabae, Y., Fukagata, K.: Convolutional neural network and long short-term memory based reduced order surrogate for minimal turbulent channel flow. Physics of Fluids 33(2) (2021) Maulik et al. [2021] Maulik, R., Lusch, B., Balaprakash, P.: Reduced-order modeling of advection-dominated systems with recurrent neural networks and convolutional autoencoders. Physics of Fluids 33(3) (2021) Vlachas et al. [2022] Vlachas, P.R., Arampatzis, G., Uhler, C., Koumoutsakos, P.: Multiscale simulations of complex systems by learning their effective dynamics. Nature Machine Intelligence 4(4), 359–366 (2022) Rautela et al. 
[2022] Rautela, M., Senthilnath, J., Monaco, E., Gopalakrishnan, S.: Delamination prediction in composite panels using unsupervised-feature learning methods with wavelet-enhanced guided wave representations. Composite Structures 291, 115579 (2022) Scheinker et al. [2023] Scheinker, A., Cropp, F., Filippetto, D.: Adaptive autoencoder latent space tuning for more robust machine learning beyond the training set for six-dimensional phase space diagnostics of a time-varying ultrafast electron-diffraction compact accelerator. Physical Review E 107(4), 045302 (2023) Kingma and Welling [2013] Kingma, D.P., Welling, M.: Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114 (2013) Hartmann et al. [2022] Hartmann, G., Goetzke, G., Düsterer, S., Feuer-Forson, P., Lever, F., Meier, D., Möller, F., Vera Ramirez, L., Guehr, M., Tiedtke, K., et al.: Unsupervised real-world knowledge extraction via disentangled variational autoencoders for photon diagnostics. Scientific Reports 12(1), 20783 (2022) Scheinker et al. [2023] Scheinker, A., Filippetto, D., Cropp, F.: 6d phase space diagnostics based on adaptively tuned physics-informed generative convolutional neural networks. In: Journal of Physics: Conference Series, vol. 2420, p. 012068 (2023). IOP Publishing Cathey et al. [2018] Cathey, B., Cousineau, S., Aleksandrov, A., Zhukov, A.: First six dimensional phase space measurement of an accelerator beam. Physical Review Letters 121(6), 064804 (2018) Tenenbaum [2005] Tenenbaum, P.: Lucretia: A matlab-based toolbox for the modelling and simulation of single-pass electron beam transport systems. In: Proceedings of the 2005 Particle Accelerator Conference, pp. 4197–4199 (2005). IEEE Young and Billen [2003] Young, L., Billen, J.: The particle tracking code parmela. In: Proceedings of the Particle Accelerator Conference, vol. 5, pp. 3521–3523 (2003) Pang and Rybarcyk [2014] Pang, X., Rybarcyk, L.: Gpu accelerated online multi-particle beam dynamics simulator for ion linear particle accelerators. Computer Physics Communications 185(3), 744–753 (2014) Adelmann [2019] Adelmann, A.: On nonintrusive uncertainty quantification and surrogate model construction in particle accelerator modeling. SIAM/ASA Journal on Uncertainty Quantification 7(2), 383–416 (2019) Newton [1970] Newton, R.: Inverse problems in physics. SIAM Review 12(3), 346–356 (1970) Wolski et al. [2022] Wolski, A., Johnson, M.A., King, M., Militsyn, B.L., Williams, P.H.: Transverse phase space tomography in an accelerator test facility using image compression and machine learning. Physical Review Accelerators and Beams 25(12), 122803 (2022) Mayet et al. [2022] Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022) Zhu et al. [2] Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. 
[2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. 
[2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. 
[2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. 
[2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. 
[2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Rautela, M., Senthilnath, J., Monaco, E., Gopalakrishnan, S.: Delamination prediction in composite panels using unsupervised-feature learning methods with wavelet-enhanced guided wave representations. Composite Structures 291, 115579 (2022) Scheinker et al. [2023] Scheinker, A., Cropp, F., Filippetto, D.: Adaptive autoencoder latent space tuning for more robust machine learning beyond the training set for six-dimensional phase space diagnostics of a time-varying ultrafast electron-diffraction compact accelerator. Physical Review E 107(4), 045302 (2023) Kingma and Welling [2013] Kingma, D.P., Welling, M.: Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114 (2013) Hartmann et al. [2022] Hartmann, G., Goetzke, G., Düsterer, S., Feuer-Forson, P., Lever, F., Meier, D., Möller, F., Vera Ramirez, L., Guehr, M., Tiedtke, K., et al.: Unsupervised real-world knowledge extraction via disentangled variational autoencoders for photon diagnostics. Scientific Reports 12(1), 20783 (2022) Scheinker et al. [2023] Scheinker, A., Filippetto, D., Cropp, F.: 6d phase space diagnostics based on adaptively tuned physics-informed generative convolutional neural networks. In: Journal of Physics: Conference Series, vol. 2420, p. 012068 (2023). IOP Publishing Cathey et al. [2018] Cathey, B., Cousineau, S., Aleksandrov, A., Zhukov, A.: First six dimensional phase space measurement of an accelerator beam. Physical Review Letters 121(6), 064804 (2018) Tenenbaum [2005] Tenenbaum, P.: Lucretia: A matlab-based toolbox for the modelling and simulation of single-pass electron beam transport systems. In: Proceedings of the 2005 Particle Accelerator Conference, pp. 4197–4199 (2005). IEEE Young and Billen [2003] Young, L., Billen, J.: The particle tracking code parmela. In: Proceedings of the Particle Accelerator Conference, vol. 5, pp. 3521–3523 (2003) Pang and Rybarcyk [2014] Pang, X., Rybarcyk, L.: Gpu accelerated online multi-particle beam dynamics simulator for ion linear particle accelerators. Computer Physics Communications 185(3), 744–753 (2014) Adelmann [2019] Adelmann, A.: On nonintrusive uncertainty quantification and surrogate model construction in particle accelerator modeling. SIAM/ASA Journal on Uncertainty Quantification 7(2), 383–416 (2019) Newton [1970] Newton, R.: Inverse problems in physics. SIAM Review 12(3), 346–356 (1970) Wolski et al. [2022] Wolski, A., Johnson, M.A., King, M., Militsyn, B.L., Williams, P.H.: Transverse phase space tomography in an accelerator test facility using image compression and machine learning. Physical Review Accelerators and Beams 25(12), 122803 (2022) Mayet et al. [2022] Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022) Zhu et al. [2] Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. 
[2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. [2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. 
IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. 
In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Scheinker, A., Cropp, F., Filippetto, D.: Adaptive autoencoder latent space tuning for more robust machine learning beyond the training set for six-dimensional phase space diagnostics of a time-varying ultrafast electron-diffraction compact accelerator. Physical Review E 107(4), 045302 (2023) Kingma and Welling [2013] Kingma, D.P., Welling, M.: Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114 (2013) Hartmann et al. [2022] Hartmann, G., Goetzke, G., Düsterer, S., Feuer-Forson, P., Lever, F., Meier, D., Möller, F., Vera Ramirez, L., Guehr, M., Tiedtke, K., et al.: Unsupervised real-world knowledge extraction via disentangled variational autoencoders for photon diagnostics. Scientific Reports 12(1), 20783 (2022) Scheinker et al. [2023] Scheinker, A., Filippetto, D., Cropp, F.: 6d phase space diagnostics based on adaptively tuned physics-informed generative convolutional neural networks. In: Journal of Physics: Conference Series, vol. 2420, p. 012068 (2023). IOP Publishing Cathey et al. [2018] Cathey, B., Cousineau, S., Aleksandrov, A., Zhukov, A.: First six dimensional phase space measurement of an accelerator beam. Physical Review Letters 121(6), 064804 (2018) Tenenbaum [2005] Tenenbaum, P.: Lucretia: A matlab-based toolbox for the modelling and simulation of single-pass electron beam transport systems. In: Proceedings of the 2005 Particle Accelerator Conference, pp. 4197–4199 (2005). IEEE Young and Billen [2003] Young, L., Billen, J.: The particle tracking code parmela. In: Proceedings of the Particle Accelerator Conference, vol. 5, pp. 3521–3523 (2003) Pang and Rybarcyk [2014] Pang, X., Rybarcyk, L.: Gpu accelerated online multi-particle beam dynamics simulator for ion linear particle accelerators. Computer Physics Communications 185(3), 744–753 (2014) Adelmann [2019] Adelmann, A.: On nonintrusive uncertainty quantification and surrogate model construction in particle accelerator modeling. SIAM/ASA Journal on Uncertainty Quantification 7(2), 383–416 (2019) Newton [1970] Newton, R.: Inverse problems in physics. SIAM Review 12(3), 346–356 (1970) Wolski et al. [2022] Wolski, A., Johnson, M.A., King, M., Militsyn, B.L., Williams, P.H.: Transverse phase space tomography in an accelerator test facility using image compression and machine learning. Physical Review Accelerators and Beams 25(12), 122803 (2022) Mayet et al. [2022] Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022) Zhu et al. [2] Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. 
[2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. 
[2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. 
[2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Kingma, D.P., Welling, M.: Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114 (2013) Hartmann et al. [2022] Hartmann, G., Goetzke, G., Düsterer, S., Feuer-Forson, P., Lever, F., Meier, D., Möller, F., Vera Ramirez, L., Guehr, M., Tiedtke, K., et al.: Unsupervised real-world knowledge extraction via disentangled variational autoencoders for photon diagnostics. Scientific Reports 12(1), 20783 (2022) Scheinker et al. [2023] Scheinker, A., Filippetto, D., Cropp, F.: 6d phase space diagnostics based on adaptively tuned physics-informed generative convolutional neural networks. In: Journal of Physics: Conference Series, vol. 2420, p. 012068 (2023). IOP Publishing Cathey et al. [2018] Cathey, B., Cousineau, S., Aleksandrov, A., Zhukov, A.: First six dimensional phase space measurement of an accelerator beam. Physical Review Letters 121(6), 064804 (2018) Tenenbaum [2005] Tenenbaum, P.: Lucretia: A matlab-based toolbox for the modelling and simulation of single-pass electron beam transport systems. In: Proceedings of the 2005 Particle Accelerator Conference, pp. 4197–4199 (2005). IEEE Young and Billen [2003] Young, L., Billen, J.: The particle tracking code parmela. In: Proceedings of the Particle Accelerator Conference, vol. 5, pp. 3521–3523 (2003) Pang and Rybarcyk [2014] Pang, X., Rybarcyk, L.: Gpu accelerated online multi-particle beam dynamics simulator for ion linear particle accelerators. Computer Physics Communications 185(3), 744–753 (2014) Adelmann [2019] Adelmann, A.: On nonintrusive uncertainty quantification and surrogate model construction in particle accelerator modeling. SIAM/ASA Journal on Uncertainty Quantification 7(2), 383–416 (2019) Newton [1970] Newton, R.: Inverse problems in physics. SIAM Review 12(3), 346–356 (1970) Wolski et al. [2022] Wolski, A., Johnson, M.A., King, M., Militsyn, B.L., Williams, P.H.: Transverse phase space tomography in an accelerator test facility using image compression and machine learning. Physical Review Accelerators and Beams 25(12), 122803 (2022) Mayet et al. [2022] Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022) Zhu et al. [2] Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. [2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. 
Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. 
Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Hartmann, G., Goetzke, G., Düsterer, S., Feuer-Forson, P., Lever, F., Meier, D., Möller, F., Vera Ramirez, L., Guehr, M., Tiedtke, K., et al.: Unsupervised real-world knowledge extraction via disentangled variational autoencoders for photon diagnostics. Scientific Reports 12(1), 20783 (2022) Scheinker et al. 
[2023] Scheinker, A., Filippetto, D., Cropp, F.: 6d phase space diagnostics based on adaptively tuned physics-informed generative convolutional neural networks. In: Journal of Physics: Conference Series, vol. 2420, p. 012068 (2023). IOP Publishing Cathey et al. [2018] Cathey, B., Cousineau, S., Aleksandrov, A., Zhukov, A.: First six dimensional phase space measurement of an accelerator beam. Physical Review Letters 121(6), 064804 (2018) Tenenbaum [2005] Tenenbaum, P.: Lucretia: A matlab-based toolbox for the modelling and simulation of single-pass electron beam transport systems. In: Proceedings of the 2005 Particle Accelerator Conference, pp. 4197–4199 (2005). IEEE Young and Billen [2003] Young, L., Billen, J.: The particle tracking code parmela. In: Proceedings of the Particle Accelerator Conference, vol. 5, pp. 3521–3523 (2003) Pang and Rybarcyk [2014] Pang, X., Rybarcyk, L.: Gpu accelerated online multi-particle beam dynamics simulator for ion linear particle accelerators. Computer Physics Communications 185(3), 744–753 (2014) Adelmann [2019] Adelmann, A.: On nonintrusive uncertainty quantification and surrogate model construction in particle accelerator modeling. SIAM/ASA Journal on Uncertainty Quantification 7(2), 383–416 (2019) Newton [1970] Newton, R.: Inverse problems in physics. SIAM Review 12(3), 346–356 (1970) Wolski et al. [2022] Wolski, A., Johnson, M.A., King, M., Militsyn, B.L., Williams, P.H.: Transverse phase space tomography in an accelerator test facility using image compression and machine learning. Physical Review Accelerators and Beams 25(12), 122803 (2022) Mayet et al. [2022] Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022) Zhu et al. [2] Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. [2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. 
[2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. 
[2] Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. [2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. 
[2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. 
[2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. [2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. 
[2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. 
[2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. [2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. 
Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. 
IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. 
Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. 
IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. 
Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. 
[2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. 
[2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. 
[2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. 
[2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. 
[2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. 
[2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. 
[2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. 
arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. 
IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. 
In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. 
IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. 
[2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. 
[2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. 
IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. 
[2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. 
IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. 
In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. 
[2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. 
[2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. 
IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. 
[2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Oca Zapiain, D., Stewart, J.A., Dingreville, R.: Accelerating phase-field-based microstructure evolution predictions via surrogate models trained by machine learning methods. npj Computational Materials 7(1), 3 (2021) Wiewel et al. [2019] Wiewel, S., Becher, M., Thuerey, N.: Latent space physics: Towards learning the temporal evolution of fluid flow. In: Computer Graphics Forum, vol. 38, pp. 71–82 (2019). Wiley Online Library Nakamura et al. [2021] Nakamura, T., Fukami, K., Hasegawa, K., Nabae, Y., Fukagata, K.: Convolutional neural network and long short-term memory based reduced order surrogate for minimal turbulent channel flow. Physics of Fluids 33(2) (2021) Maulik et al. [2021] Maulik, R., Lusch, B., Balaprakash, P.: Reduced-order modeling of advection-dominated systems with recurrent neural networks and convolutional autoencoders. Physics of Fluids 33(3) (2021) Vlachas et al. [2022] Vlachas, P.R., Arampatzis, G., Uhler, C., Koumoutsakos, P.: Multiscale simulations of complex systems by learning their effective dynamics. Nature Machine Intelligence 4(4), 359–366 (2022) Rautela et al. [2022] Rautela, M., Senthilnath, J., Monaco, E., Gopalakrishnan, S.: Delamination prediction in composite panels using unsupervised-feature learning methods with wavelet-enhanced guided wave representations. Composite Structures 291, 115579 (2022) Scheinker et al. [2023] Scheinker, A., Cropp, F., Filippetto, D.: Adaptive autoencoder latent space tuning for more robust machine learning beyond the training set for six-dimensional phase space diagnostics of a time-varying ultrafast electron-diffraction compact accelerator. Physical Review E 107(4), 045302 (2023) Kingma and Welling [2013] Kingma, D.P., Welling, M.: Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114 (2013) Hartmann et al. [2022] Hartmann, G., Goetzke, G., Düsterer, S., Feuer-Forson, P., Lever, F., Meier, D., Möller, F., Vera Ramirez, L., Guehr, M., Tiedtke, K., et al.: Unsupervised real-world knowledge extraction via disentangled variational autoencoders for photon diagnostics. Scientific Reports 12(1), 20783 (2022) Scheinker et al. [2023] Scheinker, A., Filippetto, D., Cropp, F.: 6d phase space diagnostics based on adaptively tuned physics-informed generative convolutional neural networks. In: Journal of Physics: Conference Series, vol. 2420, p. 012068 (2023). IOP Publishing Cathey et al. 
[2018] Cathey, B., Cousineau, S., Aleksandrov, A., Zhukov, A.: First six dimensional phase space measurement of an accelerator beam. Physical Review Letters 121(6), 064804 (2018) Tenenbaum [2005] Tenenbaum, P.: Lucretia: A matlab-based toolbox for the modelling and simulation of single-pass electron beam transport systems. In: Proceedings of the 2005 Particle Accelerator Conference, pp. 4197–4199 (2005). IEEE Young and Billen [2003] Young, L., Billen, J.: The particle tracking code parmela. In: Proceedings of the Particle Accelerator Conference, vol. 5, pp. 3521–3523 (2003) Pang and Rybarcyk [2014] Pang, X., Rybarcyk, L.: Gpu accelerated online multi-particle beam dynamics simulator for ion linear particle accelerators. Computer Physics Communications 185(3), 744–753 (2014) Adelmann [2019] Adelmann, A.: On nonintrusive uncertainty quantification and surrogate model construction in particle accelerator modeling. SIAM/ASA Journal on Uncertainty Quantification 7(2), 383–416 (2019) Newton [1970] Newton, R.: Inverse problems in physics. SIAM Review 12(3), 346–356 (1970) Wolski et al. [2022] Wolski, A., Johnson, M.A., King, M., Militsyn, B.L., Williams, P.H.: Transverse phase space tomography in an accelerator test facility using image compression and machine learning. Physical Review Accelerators and Beams 25(12), 122803 (2022) Mayet et al. [2022] Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022) Zhu et al. [2] Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. [2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. 
[2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. 
[2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Wiewel, S., Becher, M., Thuerey, N.: Latent space physics: Towards learning the temporal evolution of fluid flow. In: Computer Graphics Forum, vol. 38, pp. 71–82 (2019). Wiley Online Library Nakamura et al. [2021] Nakamura, T., Fukami, K., Hasegawa, K., Nabae, Y., Fukagata, K.: Convolutional neural network and long short-term memory based reduced order surrogate for minimal turbulent channel flow. Physics of Fluids 33(2) (2021) Maulik et al. [2021] Maulik, R., Lusch, B., Balaprakash, P.: Reduced-order modeling of advection-dominated systems with recurrent neural networks and convolutional autoencoders. Physics of Fluids 33(3) (2021) Vlachas et al. [2022] Vlachas, P.R., Arampatzis, G., Uhler, C., Koumoutsakos, P.: Multiscale simulations of complex systems by learning their effective dynamics. Nature Machine Intelligence 4(4), 359–366 (2022) Rautela et al. [2022] Rautela, M., Senthilnath, J., Monaco, E., Gopalakrishnan, S.: Delamination prediction in composite panels using unsupervised-feature learning methods with wavelet-enhanced guided wave representations. Composite Structures 291, 115579 (2022) Scheinker et al. 
[2023] Scheinker, A., Cropp, F., Filippetto, D.: Adaptive autoencoder latent space tuning for more robust machine learning beyond the training set for six-dimensional phase space diagnostics of a time-varying ultrafast electron-diffraction compact accelerator. Physical Review E 107(4), 045302 (2023) Kingma and Welling [2013] Kingma, D.P., Welling, M.: Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114 (2013) Hartmann et al. [2022] Hartmann, G., Goetzke, G., Düsterer, S., Feuer-Forson, P., Lever, F., Meier, D., Möller, F., Vera Ramirez, L., Guehr, M., Tiedtke, K., et al.: Unsupervised real-world knowledge extraction via disentangled variational autoencoders for photon diagnostics. Scientific Reports 12(1), 20783 (2022) Scheinker et al. [2023] Scheinker, A., Filippetto, D., Cropp, F.: 6d phase space diagnostics based on adaptively tuned physics-informed generative convolutional neural networks. In: Journal of Physics: Conference Series, vol. 2420, p. 012068 (2023). IOP Publishing Cathey et al. [2018] Cathey, B., Cousineau, S., Aleksandrov, A., Zhukov, A.: First six dimensional phase space measurement of an accelerator beam. Physical Review Letters 121(6), 064804 (2018) Tenenbaum [2005] Tenenbaum, P.: Lucretia: A matlab-based toolbox for the modelling and simulation of single-pass electron beam transport systems. In: Proceedings of the 2005 Particle Accelerator Conference, pp. 4197–4199 (2005). IEEE Young and Billen [2003] Young, L., Billen, J.: The particle tracking code parmela. In: Proceedings of the Particle Accelerator Conference, vol. 5, pp. 3521–3523 (2003) Pang and Rybarcyk [2014] Pang, X., Rybarcyk, L.: Gpu accelerated online multi-particle beam dynamics simulator for ion linear particle accelerators. Computer Physics Communications 185(3), 744–753 (2014) Adelmann [2019] Adelmann, A.: On nonintrusive uncertainty quantification and surrogate model construction in particle accelerator modeling. SIAM/ASA Journal on Uncertainty Quantification 7(2), 383–416 (2019) Newton [1970] Newton, R.: Inverse problems in physics. SIAM Review 12(3), 346–356 (1970) Wolski et al. [2022] Wolski, A., Johnson, M.A., King, M., Militsyn, B.L., Williams, P.H.: Transverse phase space tomography in an accelerator test facility using image compression and machine learning. Physical Review Accelerators and Beams 25(12), 122803 (2022) Mayet et al. [2022] Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022) Zhu et al. [2] Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. [2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. 
[2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. 
Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Nakamura, T., Fukami, K., Hasegawa, K., Nabae, Y., Fukagata, K.: Convolutional neural network and long short-term memory based reduced order surrogate for minimal turbulent channel flow. Physics of Fluids 33(2) (2021) Maulik et al. 
[2021] Maulik, R., Lusch, B., Balaprakash, P.: Reduced-order modeling of advection-dominated systems with recurrent neural networks and convolutional autoencoders. Physics of Fluids 33(3) (2021) Vlachas et al. [2022] Vlachas, P.R., Arampatzis, G., Uhler, C., Koumoutsakos, P.: Multiscale simulations of complex systems by learning their effective dynamics. Nature Machine Intelligence 4(4), 359–366 (2022) Rautela et al. [2022] Rautela, M., Senthilnath, J., Monaco, E., Gopalakrishnan, S.: Delamination prediction in composite panels using unsupervised-feature learning methods with wavelet-enhanced guided wave representations. Composite Structures 291, 115579 (2022) Scheinker et al. [2023] Scheinker, A., Cropp, F., Filippetto, D.: Adaptive autoencoder latent space tuning for more robust machine learning beyond the training set for six-dimensional phase space diagnostics of a time-varying ultrafast electron-diffraction compact accelerator. Physical Review E 107(4), 045302 (2023) Kingma and Welling [2013] Kingma, D.P., Welling, M.: Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114 (2013) Hartmann et al. [2022] Hartmann, G., Goetzke, G., Düsterer, S., Feuer-Forson, P., Lever, F., Meier, D., Möller, F., Vera Ramirez, L., Guehr, M., Tiedtke, K., et al.: Unsupervised real-world knowledge extraction via disentangled variational autoencoders for photon diagnostics. Scientific Reports 12(1), 20783 (2022) Scheinker et al. [2023] Scheinker, A., Filippetto, D., Cropp, F.: 6d phase space diagnostics based on adaptively tuned physics-informed generative convolutional neural networks. In: Journal of Physics: Conference Series, vol. 2420, p. 012068 (2023). IOP Publishing Cathey et al. [2018] Cathey, B., Cousineau, S., Aleksandrov, A., Zhukov, A.: First six dimensional phase space measurement of an accelerator beam. Physical Review Letters 121(6), 064804 (2018) Tenenbaum [2005] Tenenbaum, P.: Lucretia: A matlab-based toolbox for the modelling and simulation of single-pass electron beam transport systems. In: Proceedings of the 2005 Particle Accelerator Conference, pp. 4197–4199 (2005). IEEE Young and Billen [2003] Young, L., Billen, J.: The particle tracking code parmela. In: Proceedings of the Particle Accelerator Conference, vol. 5, pp. 3521–3523 (2003) Pang and Rybarcyk [2014] Pang, X., Rybarcyk, L.: Gpu accelerated online multi-particle beam dynamics simulator for ion linear particle accelerators. Computer Physics Communications 185(3), 744–753 (2014) Adelmann [2019] Adelmann, A.: On nonintrusive uncertainty quantification and surrogate model construction in particle accelerator modeling. SIAM/ASA Journal on Uncertainty Quantification 7(2), 383–416 (2019) Newton [1970] Newton, R.: Inverse problems in physics. SIAM Review 12(3), 346–356 (1970) Wolski et al. [2022] Wolski, A., Johnson, M.A., King, M., Militsyn, B.L., Williams, P.H.: Transverse phase space tomography in an accelerator test facility using image compression and machine learning. Physical Review Accelerators and Beams 25(12), 122803 (2022) Mayet et al. [2022] Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022) Zhu et al. 
[2] Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. [2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. 
[2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. 
[2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Maulik, R., Lusch, B., Balaprakash, P.: Reduced-order modeling of advection-dominated systems with recurrent neural networks and convolutional autoencoders. Physics of Fluids 33(3) (2021) Vlachas et al. [2022] Vlachas, P.R., Arampatzis, G., Uhler, C., Koumoutsakos, P.: Multiscale simulations of complex systems by learning their effective dynamics. Nature Machine Intelligence 4(4), 359–366 (2022) Rautela et al. [2022] Rautela, M., Senthilnath, J., Monaco, E., Gopalakrishnan, S.: Delamination prediction in composite panels using unsupervised-feature learning methods with wavelet-enhanced guided wave representations. Composite Structures 291, 115579 (2022) Scheinker et al. [2023] Scheinker, A., Cropp, F., Filippetto, D.: Adaptive autoencoder latent space tuning for more robust machine learning beyond the training set for six-dimensional phase space diagnostics of a time-varying ultrafast electron-diffraction compact accelerator. Physical Review E 107(4), 045302 (2023) Kingma and Welling [2013] Kingma, D.P., Welling, M.: Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114 (2013) Hartmann et al. [2022] Hartmann, G., Goetzke, G., Düsterer, S., Feuer-Forson, P., Lever, F., Meier, D., Möller, F., Vera Ramirez, L., Guehr, M., Tiedtke, K., et al.: Unsupervised real-world knowledge extraction via disentangled variational autoencoders for photon diagnostics. Scientific Reports 12(1), 20783 (2022) Scheinker et al. [2023] Scheinker, A., Filippetto, D., Cropp, F.: 6d phase space diagnostics based on adaptively tuned physics-informed generative convolutional neural networks. In: Journal of Physics: Conference Series, vol. 2420, p. 012068 (2023). IOP Publishing Cathey et al. [2018] Cathey, B., Cousineau, S., Aleksandrov, A., Zhukov, A.: First six dimensional phase space measurement of an accelerator beam. Physical Review Letters 121(6), 064804 (2018) Tenenbaum [2005] Tenenbaum, P.: Lucretia: A matlab-based toolbox for the modelling and simulation of single-pass electron beam transport systems. In: Proceedings of the 2005 Particle Accelerator Conference, pp. 4197–4199 (2005). IEEE Young and Billen [2003] Young, L., Billen, J.: The particle tracking code parmela. In: Proceedings of the Particle Accelerator Conference, vol. 5, pp. 3521–3523 (2003) Pang and Rybarcyk [2014] Pang, X., Rybarcyk, L.: Gpu accelerated online multi-particle beam dynamics simulator for ion linear particle accelerators. Computer Physics Communications 185(3), 744–753 (2014) Adelmann [2019] Adelmann, A.: On nonintrusive uncertainty quantification and surrogate model construction in particle accelerator modeling. SIAM/ASA Journal on Uncertainty Quantification 7(2), 383–416 (2019) Newton [1970] Newton, R.: Inverse problems in physics. SIAM Review 12(3), 346–356 (1970) Wolski et al. 
[2022] Wolski, A., Johnson, M.A., King, M., Militsyn, B.L., Williams, P.H.: Transverse phase space tomography in an accelerator test facility using image compression and machine learning. Physical Review Accelerators and Beams 25(12), 122803 (2022) Mayet et al. [2022] Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022) Zhu et al. [2] Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. [2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. 
In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. 
Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. [2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. 
[2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. 
[2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Young, L., Billen, J.: The particle tracking code parmela. In: Proceedings of the Particle Accelerator Conference, vol. 5, pp. 3521–3523 (2003) Pang and Rybarcyk [2014] Pang, X., Rybarcyk, L.: Gpu accelerated online multi-particle beam dynamics simulator for ion linear particle accelerators. Computer Physics Communications 185(3), 744–753 (2014) Adelmann [2019] Adelmann, A.: On nonintrusive uncertainty quantification and surrogate model construction in particle accelerator modeling. SIAM/ASA Journal on Uncertainty Quantification 7(2), 383–416 (2019) Newton [1970] Newton, R.: Inverse problems in physics. SIAM Review 12(3), 346–356 (1970) Wolski et al. [2022] Wolski, A., Johnson, M.A., King, M., Militsyn, B.L., Williams, P.H.: Transverse phase space tomography in an accelerator test facility using image compression and machine learning. Physical Review Accelerators and Beams 25(12), 122803 (2022) Mayet et al. [2022] Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022) Zhu et al. [2] Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. [2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. 
[2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. 
Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Pang, X., Rybarcyk, L.: Gpu accelerated online multi-particle beam dynamics simulator for ion linear particle accelerators. Computer Physics Communications 185(3), 744–753 (2014) Adelmann [2019] Adelmann, A.: On nonintrusive uncertainty quantification and surrogate model construction in particle accelerator modeling. SIAM/ASA Journal on Uncertainty Quantification 7(2), 383–416 (2019) Newton [1970] Newton, R.: Inverse problems in physics. SIAM Review 12(3), 346–356 (1970) Wolski et al. [2022] Wolski, A., Johnson, M.A., King, M., Militsyn, B.L., Williams, P.H.: Transverse phase space tomography in an accelerator test facility using image compression and machine learning. Physical Review Accelerators and Beams 25(12), 122803 (2022) Mayet et al. [2022] Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022) Zhu et al. [2] Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. 
[2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. 
[2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. 
[2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Adelmann, A.: On nonintrusive uncertainty quantification and surrogate model construction in particle accelerator modeling. SIAM/ASA Journal on Uncertainty Quantification 7(2), 383–416 (2019) Newton [1970] Newton, R.: Inverse problems in physics. SIAM Review 12(3), 346–356 (1970) Wolski et al. [2022] Wolski, A., Johnson, M.A., King, M., Militsyn, B.L., Williams, P.H.: Transverse phase space tomography in an accelerator test facility using image compression and machine learning. Physical Review Accelerators and Beams 25(12), 122803 (2022) Mayet et al. [2022] Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022) Zhu et al. [2] Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. [2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. 
Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. 
IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Newton, R.: Inverse problems in physics. SIAM Review 12(3), 346–356 (1970) Wolski et al. [2022] Wolski, A., Johnson, M.A., King, M., Militsyn, B.L., Williams, P.H.: Transverse phase space tomography in an accelerator test facility using image compression and machine learning. Physical Review Accelerators and Beams 25(12), 122803 (2022) Mayet et al. [2022] Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022) Zhu et al. [2] Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. [2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. 
[2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. 
[2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Wolski, A., Johnson, M.A., King, M., Militsyn, B.L., Williams, P.H.: Transverse phase space tomography in an accelerator test facility using image compression and machine learning. Physical Review Accelerators and Beams 25(12), 122803 (2022) Mayet et al. [2022] Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022) Zhu et al. 
[2] Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. [2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. 
[2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. 
[2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022) Zhu et al. [2] Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. [2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. 
Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. 
[2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. 
Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. 
[2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. 
Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. 
[2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. 
[2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. 
[2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. 
Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. 
Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. 
IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. 
In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. 
In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. 
[2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. 
[2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. 
[2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. 
arXiv preprint arXiv:1807.07543 (2018) Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. 
Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. 
[2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. 
[2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. 
[2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. 
arXiv preprint arXiv:1807.07543 (2018) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. 
In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018)
  15. Chen, J., Hachem, E., Viquerat, J.: Graph neural networks for laminar flow prediction around random two-dimensional shapes. Physics of Fluids 33(12) (2021) Scheinker [2021] Scheinker, A.: Adaptive machine learning for time-varying systems: low dimensional latent space tuning. Journal of Instrumentation 16(10), 10008 (2021) Montes de Oca Zapiain et al. [2021] Oca Zapiain, D., Stewart, J.A., Dingreville, R.: Accelerating phase-field-based microstructure evolution predictions via surrogate models trained by machine learning methods. npj Computational Materials 7(1), 3 (2021) Wiewel et al. [2019] Wiewel, S., Becher, M., Thuerey, N.: Latent space physics: Towards learning the temporal evolution of fluid flow. In: Computer Graphics Forum, vol. 38, pp. 71–82 (2019). Wiley Online Library Nakamura et al. [2021] Nakamura, T., Fukami, K., Hasegawa, K., Nabae, Y., Fukagata, K.: Convolutional neural network and long short-term memory based reduced order surrogate for minimal turbulent channel flow. Physics of Fluids 33(2) (2021) Maulik et al. [2021] Maulik, R., Lusch, B., Balaprakash, P.: Reduced-order modeling of advection-dominated systems with recurrent neural networks and convolutional autoencoders. Physics of Fluids 33(3) (2021) Vlachas et al. [2022] Vlachas, P.R., Arampatzis, G., Uhler, C., Koumoutsakos, P.: Multiscale simulations of complex systems by learning their effective dynamics. Nature Machine Intelligence 4(4), 359–366 (2022) Rautela et al. [2022] Rautela, M., Senthilnath, J., Monaco, E., Gopalakrishnan, S.: Delamination prediction in composite panels using unsupervised-feature learning methods with wavelet-enhanced guided wave representations. Composite Structures 291, 115579 (2022) Scheinker et al. [2023] Scheinker, A., Cropp, F., Filippetto, D.: Adaptive autoencoder latent space tuning for more robust machine learning beyond the training set for six-dimensional phase space diagnostics of a time-varying ultrafast electron-diffraction compact accelerator. Physical Review E 107(4), 045302 (2023) Kingma and Welling [2013] Kingma, D.P., Welling, M.: Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114 (2013) Hartmann et al. [2022] Hartmann, G., Goetzke, G., Düsterer, S., Feuer-Forson, P., Lever, F., Meier, D., Möller, F., Vera Ramirez, L., Guehr, M., Tiedtke, K., et al.: Unsupervised real-world knowledge extraction via disentangled variational autoencoders for photon diagnostics. Scientific Reports 12(1), 20783 (2022) Scheinker et al. [2023] Scheinker, A., Filippetto, D., Cropp, F.: 6d phase space diagnostics based on adaptively tuned physics-informed generative convolutional neural networks. In: Journal of Physics: Conference Series, vol. 2420, p. 012068 (2023). IOP Publishing Cathey et al. [2018] Cathey, B., Cousineau, S., Aleksandrov, A., Zhukov, A.: First six dimensional phase space measurement of an accelerator beam. Physical Review Letters 121(6), 064804 (2018) Tenenbaum [2005] Tenenbaum, P.: Lucretia: A matlab-based toolbox for the modelling and simulation of single-pass electron beam transport systems. In: Proceedings of the 2005 Particle Accelerator Conference, pp. 4197–4199 (2005). IEEE Young and Billen [2003] Young, L., Billen, J.: The particle tracking code parmela. In: Proceedings of the Particle Accelerator Conference, vol. 5, pp. 3521–3523 (2003) Pang and Rybarcyk [2014] Pang, X., Rybarcyk, L.: Gpu accelerated online multi-particle beam dynamics simulator for ion linear particle accelerators. 
Computer Physics Communications 185(3), 744–753 (2014) Adelmann [2019] Adelmann, A.: On nonintrusive uncertainty quantification and surrogate model construction in particle accelerator modeling. SIAM/ASA Journal on Uncertainty Quantification 7(2), 383–416 (2019) Newton [1970] Newton, R.: Inverse problems in physics. SIAM Review 12(3), 346–356 (1970) Wolski et al. [2022] Wolski, A., Johnson, M.A., King, M., Militsyn, B.L., Williams, P.H.: Transverse phase space tomography in an accelerator test facility using image compression and machine learning. Physical Review Accelerators and Beams 25(12), 122803 (2022) Mayet et al. [2022] Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022) Zhu et al. [2] Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. [2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. 
Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. 
IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Scheinker, A.: Adaptive machine learning for time-varying systems: low dimensional latent space tuning. Journal of Instrumentation 16(10), 10008 (2021) Montes de Oca Zapiain et al. [2021] Oca Zapiain, D., Stewart, J.A., Dingreville, R.: Accelerating phase-field-based microstructure evolution predictions via surrogate models trained by machine learning methods. npj Computational Materials 7(1), 3 (2021) Wiewel et al. [2019] Wiewel, S., Becher, M., Thuerey, N.: Latent space physics: Towards learning the temporal evolution of fluid flow. In: Computer Graphics Forum, vol. 38, pp. 71–82 (2019). Wiley Online Library Nakamura et al. [2021] Nakamura, T., Fukami, K., Hasegawa, K., Nabae, Y., Fukagata, K.: Convolutional neural network and long short-term memory based reduced order surrogate for minimal turbulent channel flow. Physics of Fluids 33(2) (2021) Maulik et al. [2021] Maulik, R., Lusch, B., Balaprakash, P.: Reduced-order modeling of advection-dominated systems with recurrent neural networks and convolutional autoencoders. Physics of Fluids 33(3) (2021) Vlachas et al. [2022] Vlachas, P.R., Arampatzis, G., Uhler, C., Koumoutsakos, P.: Multiscale simulations of complex systems by learning their effective dynamics. Nature Machine Intelligence 4(4), 359–366 (2022) Rautela et al. [2022] Rautela, M., Senthilnath, J., Monaco, E., Gopalakrishnan, S.: Delamination prediction in composite panels using unsupervised-feature learning methods with wavelet-enhanced guided wave representations. Composite Structures 291, 115579 (2022) Scheinker et al. [2023] Scheinker, A., Cropp, F., Filippetto, D.: Adaptive autoencoder latent space tuning for more robust machine learning beyond the training set for six-dimensional phase space diagnostics of a time-varying ultrafast electron-diffraction compact accelerator. Physical Review E 107(4), 045302 (2023) Kingma and Welling [2013] Kingma, D.P., Welling, M.: Auto-encoding variational bayes. 
arXiv preprint arXiv:1312.6114 (2013) Hartmann et al. [2022] Hartmann, G., Goetzke, G., Düsterer, S., Feuer-Forson, P., Lever, F., Meier, D., Möller, F., Vera Ramirez, L., Guehr, M., Tiedtke, K., et al.: Unsupervised real-world knowledge extraction via disentangled variational autoencoders for photon diagnostics. Scientific Reports 12(1), 20783 (2022) Scheinker et al. [2023] Scheinker, A., Filippetto, D., Cropp, F.: 6d phase space diagnostics based on adaptively tuned physics-informed generative convolutional neural networks. In: Journal of Physics: Conference Series, vol. 2420, p. 012068 (2023). IOP Publishing Cathey et al. [2018] Cathey, B., Cousineau, S., Aleksandrov, A., Zhukov, A.: First six dimensional phase space measurement of an accelerator beam. Physical Review Letters 121(6), 064804 (2018) Tenenbaum [2005] Tenenbaum, P.: Lucretia: A matlab-based toolbox for the modelling and simulation of single-pass electron beam transport systems. In: Proceedings of the 2005 Particle Accelerator Conference, pp. 4197–4199 (2005). IEEE Young and Billen [2003] Young, L., Billen, J.: The particle tracking code parmela. In: Proceedings of the Particle Accelerator Conference, vol. 5, pp. 3521–3523 (2003) Pang and Rybarcyk [2014] Pang, X., Rybarcyk, L.: Gpu accelerated online multi-particle beam dynamics simulator for ion linear particle accelerators. Computer Physics Communications 185(3), 744–753 (2014) Adelmann [2019] Adelmann, A.: On nonintrusive uncertainty quantification and surrogate model construction in particle accelerator modeling. SIAM/ASA Journal on Uncertainty Quantification 7(2), 383–416 (2019) Newton [1970] Newton, R.: Inverse problems in physics. SIAM Review 12(3), 346–356 (1970) Wolski et al. [2022] Wolski, A., Johnson, M.A., King, M., Militsyn, B.L., Williams, P.H.: Transverse phase space tomography in an accelerator test facility using image compression and machine learning. Physical Review Accelerators and Beams 25(12), 122803 (2022) Mayet et al. [2022] Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022) Zhu et al. [2] Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. [2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. 
[2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. 
[2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Oca Zapiain, D., Stewart, J.A., Dingreville, R.: Accelerating phase-field-based microstructure evolution predictions via surrogate models trained by machine learning methods. npj Computational Materials 7(1), 3 (2021) Wiewel et al. [2019] Wiewel, S., Becher, M., Thuerey, N.: Latent space physics: Towards learning the temporal evolution of fluid flow. In: Computer Graphics Forum, vol. 38, pp. 71–82 (2019). Wiley Online Library Nakamura et al. [2021] Nakamura, T., Fukami, K., Hasegawa, K., Nabae, Y., Fukagata, K.: Convolutional neural network and long short-term memory based reduced order surrogate for minimal turbulent channel flow. Physics of Fluids 33(2) (2021) Maulik et al. 
[2021] Maulik, R., Lusch, B., Balaprakash, P.: Reduced-order modeling of advection-dominated systems with recurrent neural networks and convolutional autoencoders. Physics of Fluids 33(3) (2021) Vlachas et al. [2022] Vlachas, P.R., Arampatzis, G., Uhler, C., Koumoutsakos, P.: Multiscale simulations of complex systems by learning their effective dynamics. Nature Machine Intelligence 4(4), 359–366 (2022) Rautela et al. [2022] Rautela, M., Senthilnath, J., Monaco, E., Gopalakrishnan, S.: Delamination prediction in composite panels using unsupervised-feature learning methods with wavelet-enhanced guided wave representations. Composite Structures 291, 115579 (2022) Scheinker et al. [2023] Scheinker, A., Cropp, F., Filippetto, D.: Adaptive autoencoder latent space tuning for more robust machine learning beyond the training set for six-dimensional phase space diagnostics of a time-varying ultrafast electron-diffraction compact accelerator. Physical Review E 107(4), 045302 (2023) Kingma and Welling [2013] Kingma, D.P., Welling, M.: Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114 (2013) Hartmann et al. [2022] Hartmann, G., Goetzke, G., Düsterer, S., Feuer-Forson, P., Lever, F., Meier, D., Möller, F., Vera Ramirez, L., Guehr, M., Tiedtke, K., et al.: Unsupervised real-world knowledge extraction via disentangled variational autoencoders for photon diagnostics. Scientific Reports 12(1), 20783 (2022) Scheinker et al. [2023] Scheinker, A., Filippetto, D., Cropp, F.: 6d phase space diagnostics based on adaptively tuned physics-informed generative convolutional neural networks. In: Journal of Physics: Conference Series, vol. 2420, p. 012068 (2023). IOP Publishing Cathey et al. [2018] Cathey, B., Cousineau, S., Aleksandrov, A., Zhukov, A.: First six dimensional phase space measurement of an accelerator beam. Physical Review Letters 121(6), 064804 (2018) Tenenbaum [2005] Tenenbaum, P.: Lucretia: A matlab-based toolbox for the modelling and simulation of single-pass electron beam transport systems. In: Proceedings of the 2005 Particle Accelerator Conference, pp. 4197–4199 (2005). IEEE Young and Billen [2003] Young, L., Billen, J.: The particle tracking code parmela. In: Proceedings of the Particle Accelerator Conference, vol. 5, pp. 3521–3523 (2003) Pang and Rybarcyk [2014] Pang, X., Rybarcyk, L.: Gpu accelerated online multi-particle beam dynamics simulator for ion linear particle accelerators. Computer Physics Communications 185(3), 744–753 (2014) Adelmann [2019] Adelmann, A.: On nonintrusive uncertainty quantification and surrogate model construction in particle accelerator modeling. SIAM/ASA Journal on Uncertainty Quantification 7(2), 383–416 (2019) Newton [1970] Newton, R.: Inverse problems in physics. SIAM Review 12(3), 346–356 (1970) Wolski et al. [2022] Wolski, A., Johnson, M.A., King, M., Militsyn, B.L., Williams, P.H.: Transverse phase space tomography in an accelerator test facility using image compression and machine learning. Physical Review Accelerators and Beams 25(12), 122803 (2022) Mayet et al. [2022] Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022) Zhu et al. 
[2] Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. [2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. 
[2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. 
[2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Wiewel, S., Becher, M., Thuerey, N.: Latent space physics: Towards learning the temporal evolution of fluid flow. In: Computer Graphics Forum, vol. 38, pp. 71–82 (2019). Wiley Online Library Nakamura et al. [2021] Nakamura, T., Fukami, K., Hasegawa, K., Nabae, Y., Fukagata, K.: Convolutional neural network and long short-term memory based reduced order surrogate for minimal turbulent channel flow. Physics of Fluids 33(2) (2021) Maulik et al. [2021] Maulik, R., Lusch, B., Balaprakash, P.: Reduced-order modeling of advection-dominated systems with recurrent neural networks and convolutional autoencoders. Physics of Fluids 33(3) (2021) Vlachas et al. [2022] Vlachas, P.R., Arampatzis, G., Uhler, C., Koumoutsakos, P.: Multiscale simulations of complex systems by learning their effective dynamics. Nature Machine Intelligence 4(4), 359–366 (2022) Rautela et al. [2022] Rautela, M., Senthilnath, J., Monaco, E., Gopalakrishnan, S.: Delamination prediction in composite panels using unsupervised-feature learning methods with wavelet-enhanced guided wave representations. Composite Structures 291, 115579 (2022) Scheinker et al. [2023] Scheinker, A., Cropp, F., Filippetto, D.: Adaptive autoencoder latent space tuning for more robust machine learning beyond the training set for six-dimensional phase space diagnostics of a time-varying ultrafast electron-diffraction compact accelerator. Physical Review E 107(4), 045302 (2023) Kingma and Welling [2013] Kingma, D.P., Welling, M.: Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114 (2013) Hartmann et al. [2022] Hartmann, G., Goetzke, G., Düsterer, S., Feuer-Forson, P., Lever, F., Meier, D., Möller, F., Vera Ramirez, L., Guehr, M., Tiedtke, K., et al.: Unsupervised real-world knowledge extraction via disentangled variational autoencoders for photon diagnostics. Scientific Reports 12(1), 20783 (2022) Scheinker et al. [2023] Scheinker, A., Filippetto, D., Cropp, F.: 6d phase space diagnostics based on adaptively tuned physics-informed generative convolutional neural networks. In: Journal of Physics: Conference Series, vol. 2420, p. 012068 (2023). IOP Publishing Cathey et al. [2018] Cathey, B., Cousineau, S., Aleksandrov, A., Zhukov, A.: First six dimensional phase space measurement of an accelerator beam. Physical Review Letters 121(6), 064804 (2018) Tenenbaum [2005] Tenenbaum, P.: Lucretia: A matlab-based toolbox for the modelling and simulation of single-pass electron beam transport systems. In: Proceedings of the 2005 Particle Accelerator Conference, pp. 4197–4199 (2005). IEEE Young and Billen [2003] Young, L., Billen, J.: The particle tracking code parmela. In: Proceedings of the Particle Accelerator Conference, vol. 5, pp. 
Pang, X., Rybarcyk, L.: GPU accelerated online multi-particle beam dynamics simulator for ion linear particle accelerators. Computer Physics Communications 185(3), 744–753 (2014)
Adelmann, A.: On nonintrusive uncertainty quantification and surrogate model construction in particle accelerator modeling. SIAM/ASA Journal on Uncertainty Quantification 7(2), 383–416 (2019)
Newton, R.: Inverse problems in physics. SIAM Review 12(3), 346–356 (1970)
Wolski, A., Johnson, M.A., King, M., Militsyn, B.L., Williams, P.H.: Transverse phase space tomography in an accelerator test facility using image compression and machine learning. Physical Review Accelerators and Beams 25(12), 122803 (2022)
Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022)
Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2021)
Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018)
Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep Lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023)
Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense X-ray free-electron-laser pulses using Bayesian optimization. Physical Review Research 5(2), 023114 (2023)
Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical Review Accelerators and Beams 23(7), 074601 (2020)
Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022)
Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient RF cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022)
Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at Jefferson Laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020)
Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018)
Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023)
Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016)
Nelson, B.K.: Time series analysis using autoregressive integrated moving average (ARIMA) models. Academic Emergency Medicine 5(7), 739–744 (1998)
Germain, M., Gregor, K., Murray, I., Larochelle, H.: MADE: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR
Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016)
Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature Computational Science 2(11), 745–757 (2022)
Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. 
Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Hartmann, G., Goetzke, G., Düsterer, S., Feuer-Forson, P., Lever, F., Meier, D., Möller, F., Vera Ramirez, L., Guehr, M., Tiedtke, K., et al.: Unsupervised real-world knowledge extraction via disentangled variational autoencoders for photon diagnostics. Scientific Reports 12(1), 20783 (2022) Scheinker et al. 
[2023] Scheinker, A., Filippetto, D., Cropp, F.: 6d phase space diagnostics based on adaptively tuned physics-informed generative convolutional neural networks. In: Journal of Physics: Conference Series, vol. 2420, p. 012068 (2023). IOP Publishing Cathey et al. [2018] Cathey, B., Cousineau, S., Aleksandrov, A., Zhukov, A.: First six dimensional phase space measurement of an accelerator beam. Physical Review Letters 121(6), 064804 (2018) Tenenbaum [2005] Tenenbaum, P.: Lucretia: A matlab-based toolbox for the modelling and simulation of single-pass electron beam transport systems. In: Proceedings of the 2005 Particle Accelerator Conference, pp. 4197–4199 (2005). IEEE Young and Billen [2003] Young, L., Billen, J.: The particle tracking code parmela. In: Proceedings of the Particle Accelerator Conference, vol. 5, pp. 3521–3523 (2003) Pang and Rybarcyk [2014] Pang, X., Rybarcyk, L.: Gpu accelerated online multi-particle beam dynamics simulator for ion linear particle accelerators. Computer Physics Communications 185(3), 744–753 (2014) Adelmann [2019] Adelmann, A.: On nonintrusive uncertainty quantification and surrogate model construction in particle accelerator modeling. SIAM/ASA Journal on Uncertainty Quantification 7(2), 383–416 (2019) Newton [1970] Newton, R.: Inverse problems in physics. SIAM Review 12(3), 346–356 (1970) Wolski et al. [2022] Wolski, A., Johnson, M.A., King, M., Militsyn, B.L., Williams, P.H.: Transverse phase space tomography in an accelerator test facility using image compression and machine learning. Physical Review Accelerators and Beams 25(12), 122803 (2022) Mayet et al. [2022] Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022) Zhu et al. [2] Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. [2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. 
[2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. 
Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Scheinker, A., Filippetto, D., Cropp, F.: 6d phase space diagnostics based on adaptively tuned physics-informed generative convolutional neural networks. In: Journal of Physics: Conference Series, vol. 2420, p. 012068 (2023). IOP Publishing Cathey et al. [2018] Cathey, B., Cousineau, S., Aleksandrov, A., Zhukov, A.: First six dimensional phase space measurement of an accelerator beam. Physical Review Letters 121(6), 064804 (2018) Tenenbaum [2005] Tenenbaum, P.: Lucretia: A matlab-based toolbox for the modelling and simulation of single-pass electron beam transport systems. In: Proceedings of the 2005 Particle Accelerator Conference, pp. 4197–4199 (2005). IEEE Young and Billen [2003] Young, L., Billen, J.: The particle tracking code parmela. In: Proceedings of the Particle Accelerator Conference, vol. 5, pp. 
3521–3523 (2003) Pang and Rybarcyk [2014] Pang, X., Rybarcyk, L.: Gpu accelerated online multi-particle beam dynamics simulator for ion linear particle accelerators. Computer Physics Communications 185(3), 744–753 (2014) Adelmann [2019] Adelmann, A.: On nonintrusive uncertainty quantification and surrogate model construction in particle accelerator modeling. SIAM/ASA Journal on Uncertainty Quantification 7(2), 383–416 (2019) Newton [1970] Newton, R.: Inverse problems in physics. SIAM Review 12(3), 346–356 (1970) Wolski et al. [2022] Wolski, A., Johnson, M.A., King, M., Militsyn, B.L., Williams, P.H.: Transverse phase space tomography in an accelerator test facility using image compression and machine learning. Physical Review Accelerators and Beams 25(12), 122803 (2022) Mayet et al. [2022] Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022) Zhu et al. [2] Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. [2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. 
[2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. 
arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Cathey, B., Cousineau, S., Aleksandrov, A., Zhukov, A.: First six dimensional phase space measurement of an accelerator beam. Physical Review Letters 121(6), 064804 (2018) Tenenbaum [2005] Tenenbaum, P.: Lucretia: A matlab-based toolbox for the modelling and simulation of single-pass electron beam transport systems. In: Proceedings of the 2005 Particle Accelerator Conference, pp. 4197–4199 (2005). IEEE Young and Billen [2003] Young, L., Billen, J.: The particle tracking code parmela. In: Proceedings of the Particle Accelerator Conference, vol. 5, pp. 3521–3523 (2003) Pang and Rybarcyk [2014] Pang, X., Rybarcyk, L.: Gpu accelerated online multi-particle beam dynamics simulator for ion linear particle accelerators. Computer Physics Communications 185(3), 744–753 (2014) Adelmann [2019] Adelmann, A.: On nonintrusive uncertainty quantification and surrogate model construction in particle accelerator modeling. SIAM/ASA Journal on Uncertainty Quantification 7(2), 383–416 (2019) Newton [1970] Newton, R.: Inverse problems in physics. SIAM Review 12(3), 346–356 (1970) Wolski et al. [2022] Wolski, A., Johnson, M.A., King, M., Militsyn, B.L., Williams, P.H.: Transverse phase space tomography in an accelerator test facility using image compression and machine learning. Physical Review Accelerators and Beams 25(12), 122803 (2022) Mayet et al. [2022] Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022) Zhu et al. 
[2] Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. [2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. 
[2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. 
[2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Tenenbaum, P.: Lucretia: A matlab-based toolbox for the modelling and simulation of single-pass electron beam transport systems. In: Proceedings of the 2005 Particle Accelerator Conference, pp. 4197–4199 (2005). IEEE Young and Billen [2003] Young, L., Billen, J.: The particle tracking code parmela. In: Proceedings of the Particle Accelerator Conference, vol. 5, pp. 3521–3523 (2003) Pang and Rybarcyk [2014] Pang, X., Rybarcyk, L.: Gpu accelerated online multi-particle beam dynamics simulator for ion linear particle accelerators. Computer Physics Communications 185(3), 744–753 (2014) Adelmann [2019] Adelmann, A.: On nonintrusive uncertainty quantification and surrogate model construction in particle accelerator modeling. SIAM/ASA Journal on Uncertainty Quantification 7(2), 383–416 (2019) Newton [1970] Newton, R.: Inverse problems in physics. SIAM Review 12(3), 346–356 (1970) Wolski et al. [2022] Wolski, A., Johnson, M.A., King, M., Militsyn, B.L., Williams, P.H.: Transverse phase space tomography in an accelerator test facility using image compression and machine learning. Physical Review Accelerators and Beams 25(12), 122803 (2022) Mayet et al. [2022] Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022) Zhu et al. [2] Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. [2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. 
Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. 
Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Young, L., Billen, J.: The particle tracking code parmela. In: Proceedings of the Particle Accelerator Conference, vol. 5, pp. 3521–3523 (2003) Pang and Rybarcyk [2014] Pang, X., Rybarcyk, L.: Gpu accelerated online multi-particle beam dynamics simulator for ion linear particle accelerators. Computer Physics Communications 185(3), 744–753 (2014) Adelmann [2019] Adelmann, A.: On nonintrusive uncertainty quantification and surrogate model construction in particle accelerator modeling. SIAM/ASA Journal on Uncertainty Quantification 7(2), 383–416 (2019) Newton [1970] Newton, R.: Inverse problems in physics. SIAM Review 12(3), 346–356 (1970) Wolski et al. [2022] Wolski, A., Johnson, M.A., King, M., Militsyn, B.L., Williams, P.H.: Transverse phase space tomography in an accelerator test facility using image compression and machine learning. 
Physical Review Accelerators and Beams 25(12), 122803 (2022)
Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022)
Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2021)
Emma, C., Edelen, A., Hogan, M., O'Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018)
Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep Lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023)
Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense X-ray free-electron-laser pulses using Bayesian optimization. Physical Review Research 5(2), 023114 (2023)
Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical Review Accelerators and Beams 23(7), 074601 (2020)
Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022)
Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient RF cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022)
Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at Jefferson Laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020)
Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018)
Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023)
Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016)
Nelson, B.K.: Time series analysis using autoregressive integrated moving average (ARIMA) models. Academic Emergency Medicine 5(7), 739–744 (1998)
Germain, M., Gregor, K., Murray, I., Larochelle, H.: MADE: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR
Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016)
Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature Computational Science 2(11), 745–757 (2022)
Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023)
Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE Transactions on Neural Networks and Learning Systems 32(1), 4–24 (2020)
Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons (2008)
Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021)
Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in Neural Information Processing Systems 28 (2015)
Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021)
Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in VAE latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021)
Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of Cheminformatics 10(1), 1–9 (2018)
Karevan, Z., Suykens, J.A.: Transductive LSTM for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020)
Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014)
Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE Transactions on Image Processing 13(4), 600–612 (2004)
Maaten, L., Hinton, G.: Visualizing data using t-SNE. Journal of Machine Learning Research 9(11) (2008)
McInnes, L., Healy, J., Melville, J.: UMAP: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018)
Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022)
Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023)
Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: GANs trained by a two time-scale update rule converge to a local Nash equilibrium. Advances in Neural Information Processing Systems 30 (2017)
Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in Neural Information Processing Systems 31 (2018)
Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023)
Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of ResNet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE
He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016)
Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018)
Pang, X., Rybarcyk, L.: GPU accelerated online multi-particle beam dynamics simulator for ion linear particle accelerators. Computer Physics Communications 185(3), 744–753 (2014)
Adelmann, A.: On nonintrusive uncertainty quantification and surrogate model construction in particle accelerator modeling. SIAM/ASA Journal on Uncertainty Quantification 7(2), 383–416 (2019)
Newton, R.: Inverse problems in physics. SIAM Review 12(3), 346–356 (1970)
Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. 
[2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. 
[2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. 
[2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. 
[2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. 
[2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. 
[2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. 
[2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. 
arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. 
IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. 
In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. 
IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. 
[2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. 
[2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. 
IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. 
  16. Scheinker, A.: Adaptive machine learning for time-varying systems: low dimensional latent space tuning. Journal of Instrumentation 16(10), 10008 (2021) Montes de Oca Zapiain et al. [2021] Oca Zapiain, D., Stewart, J.A., Dingreville, R.: Accelerating phase-field-based microstructure evolution predictions via surrogate models trained by machine learning methods. npj Computational Materials 7(1), 3 (2021) Wiewel et al. [2019] Wiewel, S., Becher, M., Thuerey, N.: Latent space physics: Towards learning the temporal evolution of fluid flow. In: Computer Graphics Forum, vol. 38, pp. 71–82 (2019). Wiley Online Library Nakamura et al. [2021] Nakamura, T., Fukami, K., Hasegawa, K., Nabae, Y., Fukagata, K.: Convolutional neural network and long short-term memory based reduced order surrogate for minimal turbulent channel flow. Physics of Fluids 33(2) (2021) Maulik et al. [2021] Maulik, R., Lusch, B., Balaprakash, P.: Reduced-order modeling of advection-dominated systems with recurrent neural networks and convolutional autoencoders. Physics of Fluids 33(3) (2021) Vlachas et al. [2022] Vlachas, P.R., Arampatzis, G., Uhler, C., Koumoutsakos, P.: Multiscale simulations of complex systems by learning their effective dynamics. Nature Machine Intelligence 4(4), 359–366 (2022) Rautela et al. [2022] Rautela, M., Senthilnath, J., Monaco, E., Gopalakrishnan, S.: Delamination prediction in composite panels using unsupervised-feature learning methods with wavelet-enhanced guided wave representations. Composite Structures 291, 115579 (2022) Scheinker et al. [2023] Scheinker, A., Cropp, F., Filippetto, D.: Adaptive autoencoder latent space tuning for more robust machine learning beyond the training set for six-dimensional phase space diagnostics of a time-varying ultrafast electron-diffraction compact accelerator. Physical Review E 107(4), 045302 (2023) Kingma and Welling [2013] Kingma, D.P., Welling, M.: Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114 (2013) Hartmann et al. [2022] Hartmann, G., Goetzke, G., Düsterer, S., Feuer-Forson, P., Lever, F., Meier, D., Möller, F., Vera Ramirez, L., Guehr, M., Tiedtke, K., et al.: Unsupervised real-world knowledge extraction via disentangled variational autoencoders for photon diagnostics. Scientific Reports 12(1), 20783 (2022) Scheinker et al. [2023] Scheinker, A., Filippetto, D., Cropp, F.: 6d phase space diagnostics based on adaptively tuned physics-informed generative convolutional neural networks. In: Journal of Physics: Conference Series, vol. 2420, p. 012068 (2023). IOP Publishing Cathey et al. [2018] Cathey, B., Cousineau, S., Aleksandrov, A., Zhukov, A.: First six dimensional phase space measurement of an accelerator beam. Physical Review Letters 121(6), 064804 (2018) Tenenbaum [2005] Tenenbaum, P.: Lucretia: A matlab-based toolbox for the modelling and simulation of single-pass electron beam transport systems. In: Proceedings of the 2005 Particle Accelerator Conference, pp. 4197–4199 (2005). IEEE Young and Billen [2003] Young, L., Billen, J.: The particle tracking code parmela. In: Proceedings of the Particle Accelerator Conference, vol. 5, pp. 3521–3523 (2003) Pang and Rybarcyk [2014] Pang, X., Rybarcyk, L.: Gpu accelerated online multi-particle beam dynamics simulator for ion linear particle accelerators. Computer Physics Communications 185(3), 744–753 (2014) Adelmann [2019] Adelmann, A.: On nonintrusive uncertainty quantification and surrogate model construction in particle accelerator modeling. 
SIAM/ASA Journal on Uncertainty Quantification 7(2), 383–416 (2019) Newton [1970] Newton, R.: Inverse problems in physics. SIAM Review 12(3), 346–356 (1970) Wolski et al. [2022] Wolski, A., Johnson, M.A., King, M., Militsyn, B.L., Williams, P.H.: Transverse phase space tomography in an accelerator test facility using image compression and machine learning. Physical Review Accelerators and Beams 25(12), 122803 (2022) Mayet et al. [2022] Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022) Zhu et al. [2] Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. [2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. 
Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. 
[2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Oca Zapiain, D., Stewart, J.A., Dingreville, R.: Accelerating phase-field-based microstructure evolution predictions via surrogate models trained by machine learning methods. npj Computational Materials 7(1), 3 (2021) Wiewel et al. [2019] Wiewel, S., Becher, M., Thuerey, N.: Latent space physics: Towards learning the temporal evolution of fluid flow. In: Computer Graphics Forum, vol. 38, pp. 71–82 (2019). Wiley Online Library Nakamura et al. [2021] Nakamura, T., Fukami, K., Hasegawa, K., Nabae, Y., Fukagata, K.: Convolutional neural network and long short-term memory based reduced order surrogate for minimal turbulent channel flow. Physics of Fluids 33(2) (2021) Maulik et al. [2021] Maulik, R., Lusch, B., Balaprakash, P.: Reduced-order modeling of advection-dominated systems with recurrent neural networks and convolutional autoencoders. Physics of Fluids 33(3) (2021) Vlachas et al. [2022] Vlachas, P.R., Arampatzis, G., Uhler, C., Koumoutsakos, P.: Multiscale simulations of complex systems by learning their effective dynamics. Nature Machine Intelligence 4(4), 359–366 (2022) Rautela et al. [2022] Rautela, M., Senthilnath, J., Monaco, E., Gopalakrishnan, S.: Delamination prediction in composite panels using unsupervised-feature learning methods with wavelet-enhanced guided wave representations. Composite Structures 291, 115579 (2022) Scheinker et al. [2023] Scheinker, A., Cropp, F., Filippetto, D.: Adaptive autoencoder latent space tuning for more robust machine learning beyond the training set for six-dimensional phase space diagnostics of a time-varying ultrafast electron-diffraction compact accelerator. Physical Review E 107(4), 045302 (2023) Kingma and Welling [2013] Kingma, D.P., Welling, M.: Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114 (2013) Hartmann et al. [2022] Hartmann, G., Goetzke, G., Düsterer, S., Feuer-Forson, P., Lever, F., Meier, D., Möller, F., Vera Ramirez, L., Guehr, M., Tiedtke, K., et al.: Unsupervised real-world knowledge extraction via disentangled variational autoencoders for photon diagnostics. Scientific Reports 12(1), 20783 (2022) Scheinker et al. 
[2023] Scheinker, A., Filippetto, D., Cropp, F.: 6d phase space diagnostics based on adaptively tuned physics-informed generative convolutional neural networks. In: Journal of Physics: Conference Series, vol. 2420, p. 012068 (2023). IOP Publishing Cathey et al. [2018] Cathey, B., Cousineau, S., Aleksandrov, A., Zhukov, A.: First six dimensional phase space measurement of an accelerator beam. Physical Review Letters 121(6), 064804 (2018) Tenenbaum [2005] Tenenbaum, P.: Lucretia: A matlab-based toolbox for the modelling and simulation of single-pass electron beam transport systems. In: Proceedings of the 2005 Particle Accelerator Conference, pp. 4197–4199 (2005). IEEE Young and Billen [2003] Young, L., Billen, J.: The particle tracking code parmela. In: Proceedings of the Particle Accelerator Conference, vol. 5, pp. 3521–3523 (2003) Pang and Rybarcyk [2014] Pang, X., Rybarcyk, L.: Gpu accelerated online multi-particle beam dynamics simulator for ion linear particle accelerators. Computer Physics Communications 185(3), 744–753 (2014) Adelmann [2019] Adelmann, A.: On nonintrusive uncertainty quantification and surrogate model construction in particle accelerator modeling. SIAM/ASA Journal on Uncertainty Quantification 7(2), 383–416 (2019) Newton [1970] Newton, R.: Inverse problems in physics. SIAM Review 12(3), 346–356 (1970) Wolski et al. [2022] Wolski, A., Johnson, M.A., King, M., Militsyn, B.L., Williams, P.H.: Transverse phase space tomography in an accelerator test facility using image compression and machine learning. Physical Review Accelerators and Beams 25(12), 122803 (2022) Mayet et al. [2022] Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022) Zhu et al. [2] Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. [2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. 
[2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. 
Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Wiewel, S., Becher, M., Thuerey, N.: Latent space physics: Towards learning the temporal evolution of fluid flow. In: Computer Graphics Forum, vol. 38, pp. 71–82 (2019). Wiley Online Library Nakamura et al. [2021] Nakamura, T., Fukami, K., Hasegawa, K., Nabae, Y., Fukagata, K.: Convolutional neural network and long short-term memory based reduced order surrogate for minimal turbulent channel flow. Physics of Fluids 33(2) (2021) Maulik et al. [2021] Maulik, R., Lusch, B., Balaprakash, P.: Reduced-order modeling of advection-dominated systems with recurrent neural networks and convolutional autoencoders. Physics of Fluids 33(3) (2021) Vlachas et al. [2022] Vlachas, P.R., Arampatzis, G., Uhler, C., Koumoutsakos, P.: Multiscale simulations of complex systems by learning their effective dynamics. Nature Machine Intelligence 4(4), 359–366 (2022) Rautela et al. 
[2022] Rautela, M., Senthilnath, J., Monaco, E., Gopalakrishnan, S.: Delamination prediction in composite panels using unsupervised-feature learning methods with wavelet-enhanced guided wave representations. Composite Structures 291, 115579 (2022) Scheinker et al. [2023] Scheinker, A., Cropp, F., Filippetto, D.: Adaptive autoencoder latent space tuning for more robust machine learning beyond the training set for six-dimensional phase space diagnostics of a time-varying ultrafast electron-diffraction compact accelerator. Physical Review E 107(4), 045302 (2023) Kingma and Welling [2013] Kingma, D.P., Welling, M.: Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114 (2013) Hartmann et al. [2022] Hartmann, G., Goetzke, G., Düsterer, S., Feuer-Forson, P., Lever, F., Meier, D., Möller, F., Vera Ramirez, L., Guehr, M., Tiedtke, K., et al.: Unsupervised real-world knowledge extraction via disentangled variational autoencoders for photon diagnostics. Scientific Reports 12(1), 20783 (2022) Scheinker et al. [2023] Scheinker, A., Filippetto, D., Cropp, F.: 6d phase space diagnostics based on adaptively tuned physics-informed generative convolutional neural networks. In: Journal of Physics: Conference Series, vol. 2420, p. 012068 (2023). IOP Publishing Cathey et al. [2018] Cathey, B., Cousineau, S., Aleksandrov, A., Zhukov, A.: First six dimensional phase space measurement of an accelerator beam. Physical Review Letters 121(6), 064804 (2018) Tenenbaum [2005] Tenenbaum, P.: Lucretia: A matlab-based toolbox for the modelling and simulation of single-pass electron beam transport systems. In: Proceedings of the 2005 Particle Accelerator Conference, pp. 4197–4199 (2005). IEEE Young and Billen [2003] Young, L., Billen, J.: The particle tracking code parmela. In: Proceedings of the Particle Accelerator Conference, vol. 5, pp. 3521–3523 (2003) Pang and Rybarcyk [2014] Pang, X., Rybarcyk, L.: Gpu accelerated online multi-particle beam dynamics simulator for ion linear particle accelerators. Computer Physics Communications 185(3), 744–753 (2014) Adelmann [2019] Adelmann, A.: On nonintrusive uncertainty quantification and surrogate model construction in particle accelerator modeling. SIAM/ASA Journal on Uncertainty Quantification 7(2), 383–416 (2019) Newton [1970] Newton, R.: Inverse problems in physics. SIAM Review 12(3), 346–356 (1970) Wolski et al. [2022] Wolski, A., Johnson, M.A., King, M., Militsyn, B.L., Williams, P.H.: Transverse phase space tomography in an accelerator test facility using image compression and machine learning. Physical Review Accelerators and Beams 25(12), 122803 (2022) Mayet et al. [2022] Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022) Zhu et al. [2] Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. 
[2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. 
[2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. 
[2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Nakamura, T., Fukami, K., Hasegawa, K., Nabae, Y., Fukagata, K.: Convolutional neural network and long short-term memory based reduced order surrogate for minimal turbulent channel flow. Physics of Fluids 33(2) (2021) Maulik et al. [2021] Maulik, R., Lusch, B., Balaprakash, P.: Reduced-order modeling of advection-dominated systems with recurrent neural networks and convolutional autoencoders. Physics of Fluids 33(3) (2021) Vlachas et al. [2022] Vlachas, P.R., Arampatzis, G., Uhler, C., Koumoutsakos, P.: Multiscale simulations of complex systems by learning their effective dynamics. Nature Machine Intelligence 4(4), 359–366 (2022) Rautela et al. [2022] Rautela, M., Senthilnath, J., Monaco, E., Gopalakrishnan, S.: Delamination prediction in composite panels using unsupervised-feature learning methods with wavelet-enhanced guided wave representations. Composite Structures 291, 115579 (2022) Scheinker et al. [2023] Scheinker, A., Cropp, F., Filippetto, D.: Adaptive autoencoder latent space tuning for more robust machine learning beyond the training set for six-dimensional phase space diagnostics of a time-varying ultrafast electron-diffraction compact accelerator. Physical Review E 107(4), 045302 (2023) Kingma and Welling [2013] Kingma, D.P., Welling, M.: Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114 (2013) Hartmann et al. [2022] Hartmann, G., Goetzke, G., Düsterer, S., Feuer-Forson, P., Lever, F., Meier, D., Möller, F., Vera Ramirez, L., Guehr, M., Tiedtke, K., et al.: Unsupervised real-world knowledge extraction via disentangled variational autoencoders for photon diagnostics. Scientific Reports 12(1), 20783 (2022) Scheinker et al. [2023] Scheinker, A., Filippetto, D., Cropp, F.: 6d phase space diagnostics based on adaptively tuned physics-informed generative convolutional neural networks. In: Journal of Physics: Conference Series, vol. 2420, p. 012068 (2023). IOP Publishing Cathey et al. [2018] Cathey, B., Cousineau, S., Aleksandrov, A., Zhukov, A.: First six dimensional phase space measurement of an accelerator beam. Physical Review Letters 121(6), 064804 (2018) Tenenbaum [2005] Tenenbaum, P.: Lucretia: A matlab-based toolbox for the modelling and simulation of single-pass electron beam transport systems. In: Proceedings of the 2005 Particle Accelerator Conference, pp. 4197–4199 (2005). IEEE Young and Billen [2003] Young, L., Billen, J.: The particle tracking code parmela. In: Proceedings of the Particle Accelerator Conference, vol. 5, pp. 3521–3523 (2003) Pang and Rybarcyk [2014] Pang, X., Rybarcyk, L.: Gpu accelerated online multi-particle beam dynamics simulator for ion linear particle accelerators. Computer Physics Communications 185(3), 744–753 (2014) Adelmann [2019] Adelmann, A.: On nonintrusive uncertainty quantification and surrogate model construction in particle accelerator modeling. SIAM/ASA Journal on Uncertainty Quantification 7(2), 383–416 (2019) Newton [1970] Newton, R.: Inverse problems in physics. SIAM Review 12(3), 346–356 (1970) Wolski et al. [2022] Wolski, A., Johnson, M.A., King, M., Militsyn, B.L., Williams, P.H.: Transverse phase space tomography in an accelerator test facility using image compression and machine learning. Physical Review Accelerators and Beams 25(12), 122803 (2022) Mayet et al. 
[2022] Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022) Zhu et al. [2] Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. [2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. 
The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. 
[2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Maulik, R., Lusch, B., Balaprakash, P.: Reduced-order modeling of advection-dominated systems with recurrent neural networks and convolutional autoencoders. Physics of Fluids 33(3) (2021) Vlachas et al. [2022] Vlachas, P.R., Arampatzis, G., Uhler, C., Koumoutsakos, P.: Multiscale simulations of complex systems by learning their effective dynamics. Nature Machine Intelligence 4(4), 359–366 (2022) Rautela et al. [2022] Rautela, M., Senthilnath, J., Monaco, E., Gopalakrishnan, S.: Delamination prediction in composite panels using unsupervised-feature learning methods with wavelet-enhanced guided wave representations. Composite Structures 291, 115579 (2022) Scheinker et al. [2023] Scheinker, A., Cropp, F., Filippetto, D.: Adaptive autoencoder latent space tuning for more robust machine learning beyond the training set for six-dimensional phase space diagnostics of a time-varying ultrafast electron-diffraction compact accelerator. Physical Review E 107(4), 045302 (2023) Kingma and Welling [2013] Kingma, D.P., Welling, M.: Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114 (2013) Hartmann et al. [2022] Hartmann, G., Goetzke, G., Düsterer, S., Feuer-Forson, P., Lever, F., Meier, D., Möller, F., Vera Ramirez, L., Guehr, M., Tiedtke, K., et al.: Unsupervised real-world knowledge extraction via disentangled variational autoencoders for photon diagnostics. Scientific Reports 12(1), 20783 (2022) Scheinker et al. [2023] Scheinker, A., Filippetto, D., Cropp, F.: 6d phase space diagnostics based on adaptively tuned physics-informed generative convolutional neural networks. In: Journal of Physics: Conference Series, vol. 2420, p. 012068 (2023). IOP Publishing Cathey et al. [2018] Cathey, B., Cousineau, S., Aleksandrov, A., Zhukov, A.: First six dimensional phase space measurement of an accelerator beam. Physical Review Letters 121(6), 064804 (2018) Tenenbaum [2005] Tenenbaum, P.: Lucretia: A matlab-based toolbox for the modelling and simulation of single-pass electron beam transport systems. In: Proceedings of the 2005 Particle Accelerator Conference, pp. 4197–4199 (2005). IEEE Young and Billen [2003] Young, L., Billen, J.: The particle tracking code parmela. In: Proceedings of the Particle Accelerator Conference, vol. 5, pp. 3521–3523 (2003) Pang and Rybarcyk [2014] Pang, X., Rybarcyk, L.: Gpu accelerated online multi-particle beam dynamics simulator for ion linear particle accelerators. Computer Physics Communications 185(3), 744–753 (2014) Adelmann [2019] Adelmann, A.: On nonintrusive uncertainty quantification and surrogate model construction in particle accelerator modeling. 
SIAM/ASA Journal on Uncertainty Quantification 7(2), 383–416 (2019) Newton [1970] Newton, R.: Inverse problems in physics. SIAM Review 12(3), 346–356 (1970) Wolski et al. [2022] Wolski, A., Johnson, M.A., King, M., Militsyn, B.L., Williams, P.H.: Transverse phase space tomography in an accelerator test facility using image compression and machine learning. Physical Review Accelerators and Beams 25(12), 122803 (2022) Mayet et al. [2022] Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022) Zhu et al. [2] Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. [2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. 
IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Tenenbaum, P.: Lucretia: A matlab-based toolbox for the modelling and simulation of single-pass electron beam transport systems. In: Proceedings of the 2005 Particle Accelerator Conference, pp. 4197–4199 (2005). IEEE Young and Billen [2003] Young, L., Billen, J.: The particle tracking code parmela. In: Proceedings of the Particle Accelerator Conference, vol. 5, pp. 3521–3523 (2003) Pang and Rybarcyk [2014] Pang, X., Rybarcyk, L.: Gpu accelerated online multi-particle beam dynamics simulator for ion linear particle accelerators. Computer Physics Communications 185(3), 744–753 (2014) Adelmann [2019] Adelmann, A.: On nonintrusive uncertainty quantification and surrogate model construction in particle accelerator modeling. SIAM/ASA Journal on Uncertainty Quantification 7(2), 383–416 (2019) Newton [1970] Newton, R.: Inverse problems in physics. SIAM Review 12(3), 346–356 (1970) Wolski et al. [2022] Wolski, A., Johnson, M.A., King, M., Militsyn, B.L., Williams, P.H.: Transverse phase space tomography in an accelerator test facility using image compression and machine learning. Physical Review Accelerators and Beams 25(12), 122803 (2022) Mayet et al. [2022] Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022) Zhu et al. [2] Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. 
Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. [2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. 
[2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. 
[2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Young, L., Billen, J.: The particle tracking code parmela. In: Proceedings of the Particle Accelerator Conference, vol. 5, pp. 3521–3523 (2003) Pang and Rybarcyk [2014] Pang, X., Rybarcyk, L.: Gpu accelerated online multi-particle beam dynamics simulator for ion linear particle accelerators. Computer Physics Communications 185(3), 744–753 (2014) Adelmann [2019] Adelmann, A.: On nonintrusive uncertainty quantification and surrogate model construction in particle accelerator modeling. SIAM/ASA Journal on Uncertainty Quantification 7(2), 383–416 (2019) Newton [1970] Newton, R.: Inverse problems in physics. SIAM Review 12(3), 346–356 (1970) Wolski et al. [2022] Wolski, A., Johnson, M.A., King, M., Militsyn, B.L., Williams, P.H.: Transverse phase space tomography in an accelerator test facility using image compression and machine learning. Physical Review Accelerators and Beams 25(12), 122803 (2022) Mayet et al. [2022] Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022) Zhu et al. [2] Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. [2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. 
[2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. 
Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Pang, X., Rybarcyk, L.: Gpu accelerated online multi-particle beam dynamics simulator for ion linear particle accelerators. Computer Physics Communications 185(3), 744–753 (2014) Adelmann [2019] Adelmann, A.: On nonintrusive uncertainty quantification and surrogate model construction in particle accelerator modeling. SIAM/ASA Journal on Uncertainty Quantification 7(2), 383–416 (2019) Newton [1970] Newton, R.: Inverse problems in physics. SIAM Review 12(3), 346–356 (1970) Wolski et al. [2022] Wolski, A., Johnson, M.A., King, M., Militsyn, B.L., Williams, P.H.: Transverse phase space tomography in an accelerator test facility using image compression and machine learning. Physical Review Accelerators and Beams 25(12), 122803 (2022) Mayet et al. [2022] Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022) Zhu et al. [2] Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. 
[2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. 
[2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. 
[2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Adelmann, A.: On nonintrusive uncertainty quantification and surrogate model construction in particle accelerator modeling. SIAM/ASA Journal on Uncertainty Quantification 7(2), 383–416 (2019) Newton [1970] Newton, R.: Inverse problems in physics. SIAM Review 12(3), 346–356 (1970) Wolski et al. [2022] Wolski, A., Johnson, M.A., King, M., Militsyn, B.L., Williams, P.H.: Transverse phase space tomography in an accelerator test facility using image compression and machine learning. Physical Review Accelerators and Beams 25(12), 122803 (2022) Mayet et al. [2022] Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022) Zhu et al. [2] Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. [2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. 
Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. 
IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Newton, R.: Inverse problems in physics. SIAM Review 12(3), 346–356 (1970) Wolski et al. [2022] Wolski, A., Johnson, M.A., King, M., Militsyn, B.L., Williams, P.H.: Transverse phase space tomography in an accelerator test facility using image compression and machine learning. Physical Review Accelerators and Beams 25(12), 122803 (2022) Mayet et al. [2022] Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022) Zhu et al. [2] Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. [2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. 
[2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. 
[2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Wolski, A., Johnson, M.A., King, M., Militsyn, B.L., Williams, P.H.: Transverse phase space tomography in an accelerator test facility using image compression and machine learning. Physical Review Accelerators and Beams 25(12), 122803 (2022) Mayet et al. [2022] Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022) Zhu et al. 
Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022)
Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2021)
Emma, C., Edelen, A., Hogan, M., O'Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018)
Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep Lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023)
Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense X-ray free-electron-laser pulses using Bayesian optimization. Physical Review Research 5(2), 023114 (2023)
Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical Review Accelerators and Beams 23(7), 074601 (2020)
Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022)
Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient RF cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022)
Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at Jefferson Laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020)
Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018)
Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023)
Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016)
Nelson, B.K.: Time series analysis using autoregressive integrated moving average (ARIMA) models. Academic Emergency Medicine 5(7), 739–744 (1998)
Germain, M., Gregor, K., Murray, I., Larochelle, H.: MADE: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR
Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016)
Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature Computational Science 2(11), 745–757 (2022)
Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023)
Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE Transactions on Neural Networks and Learning Systems 32(1), 4–24 (2020)
Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons (2008)
Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021)
Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in Neural Information Processing Systems 28 (2015)
Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021)
Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in VAE latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021)
Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of Cheminformatics 10(1), 1–9 (2018)
Karevan, Z., Suykens, J.A.: Transductive LSTM for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020)
Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014)
Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE Transactions on Image Processing 13(4), 600–612 (2004)
Van der Maaten, L., Hinton, G.: Visualizing data using t-SNE. Journal of Machine Learning Research 9(11) (2008)
McInnes, L., Healy, J., Melville, J.: UMAP: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018)
Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022)
Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023)
Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: GANs trained by a two time-scale update rule converge to a local Nash equilibrium. Advances in Neural Information Processing Systems 30 (2017)
Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in Neural Information Processing Systems 31 (2018)
Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023)
Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of ResNet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE
He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016)
Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018)
IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. 
[2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. 
Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. 
[2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. 
Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. 
[2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. 
[2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. 
[2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. 
Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. 
Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. 
IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. 
In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. 
In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. 
  17. Oca Zapiain, D., Stewart, J.A., Dingreville, R.: Accelerating phase-field-based microstructure evolution predictions via surrogate models trained by machine learning methods. npj Computational Materials 7(1), 3 (2021) Wiewel et al. [2019] Wiewel, S., Becher, M., Thuerey, N.: Latent space physics: Towards learning the temporal evolution of fluid flow. In: Computer Graphics Forum, vol. 38, pp. 71–82 (2019). Wiley Online Library Nakamura et al. [2021] Nakamura, T., Fukami, K., Hasegawa, K., Nabae, Y., Fukagata, K.: Convolutional neural network and long short-term memory based reduced order surrogate for minimal turbulent channel flow. Physics of Fluids 33(2) (2021) Maulik et al. [2021] Maulik, R., Lusch, B., Balaprakash, P.: Reduced-order modeling of advection-dominated systems with recurrent neural networks and convolutional autoencoders. Physics of Fluids 33(3) (2021) Vlachas et al. [2022] Vlachas, P.R., Arampatzis, G., Uhler, C., Koumoutsakos, P.: Multiscale simulations of complex systems by learning their effective dynamics. Nature Machine Intelligence 4(4), 359–366 (2022) Rautela et al. [2022] Rautela, M., Senthilnath, J., Monaco, E., Gopalakrishnan, S.: Delamination prediction in composite panels using unsupervised-feature learning methods with wavelet-enhanced guided wave representations. Composite Structures 291, 115579 (2022) Scheinker et al. [2023] Scheinker, A., Cropp, F., Filippetto, D.: Adaptive autoencoder latent space tuning for more robust machine learning beyond the training set for six-dimensional phase space diagnostics of a time-varying ultrafast electron-diffraction compact accelerator. Physical Review E 107(4), 045302 (2023) Kingma and Welling [2013] Kingma, D.P., Welling, M.: Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114 (2013) Hartmann et al. [2022] Hartmann, G., Goetzke, G., Düsterer, S., Feuer-Forson, P., Lever, F., Meier, D., Möller, F., Vera Ramirez, L., Guehr, M., Tiedtke, K., et al.: Unsupervised real-world knowledge extraction via disentangled variational autoencoders for photon diagnostics. Scientific Reports 12(1), 20783 (2022) Scheinker et al. [2023] Scheinker, A., Filippetto, D., Cropp, F.: 6d phase space diagnostics based on adaptively tuned physics-informed generative convolutional neural networks. In: Journal of Physics: Conference Series, vol. 2420, p. 012068 (2023). IOP Publishing Cathey et al. [2018] Cathey, B., Cousineau, S., Aleksandrov, A., Zhukov, A.: First six dimensional phase space measurement of an accelerator beam. Physical Review Letters 121(6), 064804 (2018) Tenenbaum [2005] Tenenbaum, P.: Lucretia: A matlab-based toolbox for the modelling and simulation of single-pass electron beam transport systems. In: Proceedings of the 2005 Particle Accelerator Conference, pp. 4197–4199 (2005). IEEE Young and Billen [2003] Young, L., Billen, J.: The particle tracking code parmela. In: Proceedings of the Particle Accelerator Conference, vol. 5, pp. 3521–3523 (2003) Pang and Rybarcyk [2014] Pang, X., Rybarcyk, L.: Gpu accelerated online multi-particle beam dynamics simulator for ion linear particle accelerators. Computer Physics Communications 185(3), 744–753 (2014) Adelmann [2019] Adelmann, A.: On nonintrusive uncertainty quantification and surrogate model construction in particle accelerator modeling. SIAM/ASA Journal on Uncertainty Quantification 7(2), 383–416 (2019) Newton [1970] Newton, R.: Inverse problems in physics. SIAM Review 12(3), 346–356 (1970) Wolski et al. 
[2022] Wolski, A., Johnson, M.A., King, M., Militsyn, B.L., Williams, P.H.: Transverse phase space tomography in an accelerator test facility using image compression and machine learning. Physical Review Accelerators and Beams 25(12), 122803 (2022) Mayet et al. [2022] Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022) Zhu et al. [2] Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. [2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. 
In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. 
[2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Wiewel, S., Becher, M., Thuerey, N.: Latent space physics: Towards learning the temporal evolution of fluid flow. In: Computer Graphics Forum, vol. 38, pp. 71–82 (2019). Wiley Online Library Nakamura et al. [2021] Nakamura, T., Fukami, K., Hasegawa, K., Nabae, Y., Fukagata, K.: Convolutional neural network and long short-term memory based reduced order surrogate for minimal turbulent channel flow. Physics of Fluids 33(2) (2021) Maulik et al. [2021] Maulik, R., Lusch, B., Balaprakash, P.: Reduced-order modeling of advection-dominated systems with recurrent neural networks and convolutional autoencoders. Physics of Fluids 33(3) (2021) Vlachas et al. [2022] Vlachas, P.R., Arampatzis, G., Uhler, C., Koumoutsakos, P.: Multiscale simulations of complex systems by learning their effective dynamics. Nature Machine Intelligence 4(4), 359–366 (2022) Rautela et al. [2022] Rautela, M., Senthilnath, J., Monaco, E., Gopalakrishnan, S.: Delamination prediction in composite panels using unsupervised-feature learning methods with wavelet-enhanced guided wave representations. Composite Structures 291, 115579 (2022) Scheinker et al. [2023] Scheinker, A., Cropp, F., Filippetto, D.: Adaptive autoencoder latent space tuning for more robust machine learning beyond the training set for six-dimensional phase space diagnostics of a time-varying ultrafast electron-diffraction compact accelerator. Physical Review E 107(4), 045302 (2023) Kingma and Welling [2013] Kingma, D.P., Welling, M.: Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114 (2013) Hartmann et al. [2022] Hartmann, G., Goetzke, G., Düsterer, S., Feuer-Forson, P., Lever, F., Meier, D., Möller, F., Vera Ramirez, L., Guehr, M., Tiedtke, K., et al.: Unsupervised real-world knowledge extraction via disentangled variational autoencoders for photon diagnostics. Scientific Reports 12(1), 20783 (2022) Scheinker et al. [2023] Scheinker, A., Filippetto, D., Cropp, F.: 6d phase space diagnostics based on adaptively tuned physics-informed generative convolutional neural networks. In: Journal of Physics: Conference Series, vol. 2420, p. 012068 (2023). IOP Publishing Cathey et al. [2018] Cathey, B., Cousineau, S., Aleksandrov, A., Zhukov, A.: First six dimensional phase space measurement of an accelerator beam. 
Physical Review Letters 121(6), 064804 (2018) Tenenbaum [2005] Tenenbaum, P.: Lucretia: A matlab-based toolbox for the modelling and simulation of single-pass electron beam transport systems. In: Proceedings of the 2005 Particle Accelerator Conference, pp. 4197–4199 (2005). IEEE Young and Billen [2003] Young, L., Billen, J.: The particle tracking code parmela. In: Proceedings of the Particle Accelerator Conference, vol. 5, pp. 3521–3523 (2003) Pang and Rybarcyk [2014] Pang, X., Rybarcyk, L.: Gpu accelerated online multi-particle beam dynamics simulator for ion linear particle accelerators. Computer Physics Communications 185(3), 744–753 (2014) Adelmann [2019] Adelmann, A.: On nonintrusive uncertainty quantification and surrogate model construction in particle accelerator modeling. SIAM/ASA Journal on Uncertainty Quantification 7(2), 383–416 (2019) Newton [1970] Newton, R.: Inverse problems in physics. SIAM Review 12(3), 346–356 (1970) Wolski et al. [2022] Wolski, A., Johnson, M.A., King, M., Militsyn, B.L., Williams, P.H.: Transverse phase space tomography in an accelerator test facility using image compression and machine learning. Physical Review Accelerators and Beams 25(12), 122803 (2022) Mayet et al. [2022] Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022) Zhu et al. [2] Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. [2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. 
Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. 
IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Nakamura, T., Fukami, K., Hasegawa, K., Nabae, Y., Fukagata, K.: Convolutional neural network and long short-term memory based reduced order surrogate for minimal turbulent channel flow. Physics of Fluids 33(2) (2021) Maulik et al. [2021] Maulik, R., Lusch, B., Balaprakash, P.: Reduced-order modeling of advection-dominated systems with recurrent neural networks and convolutional autoencoders. Physics of Fluids 33(3) (2021) Vlachas et al. [2022] Vlachas, P.R., Arampatzis, G., Uhler, C., Koumoutsakos, P.: Multiscale simulations of complex systems by learning their effective dynamics. Nature Machine Intelligence 4(4), 359–366 (2022) Rautela et al. [2022] Rautela, M., Senthilnath, J., Monaco, E., Gopalakrishnan, S.: Delamination prediction in composite panels using unsupervised-feature learning methods with wavelet-enhanced guided wave representations. Composite Structures 291, 115579 (2022) Scheinker et al. [2023] Scheinker, A., Cropp, F., Filippetto, D.: Adaptive autoencoder latent space tuning for more robust machine learning beyond the training set for six-dimensional phase space diagnostics of a time-varying ultrafast electron-diffraction compact accelerator. Physical Review E 107(4), 045302 (2023) Kingma and Welling [2013] Kingma, D.P., Welling, M.: Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114 (2013) Hartmann et al. 
[2022] Hartmann, G., Goetzke, G., Düsterer, S., Feuer-Forson, P., Lever, F., Meier, D., Möller, F., Vera Ramirez, L., Guehr, M., Tiedtke, K., et al.: Unsupervised real-world knowledge extraction via disentangled variational autoencoders for photon diagnostics. Scientific Reports 12(1), 20783 (2022) Scheinker et al. [2023] Scheinker, A., Filippetto, D., Cropp, F.: 6d phase space diagnostics based on adaptively tuned physics-informed generative convolutional neural networks. In: Journal of Physics: Conference Series, vol. 2420, p. 012068 (2023). IOP Publishing Cathey et al. [2018] Cathey, B., Cousineau, S., Aleksandrov, A., Zhukov, A.: First six dimensional phase space measurement of an accelerator beam. Physical Review Letters 121(6), 064804 (2018) Tenenbaum [2005] Tenenbaum, P.: Lucretia: A matlab-based toolbox for the modelling and simulation of single-pass electron beam transport systems. In: Proceedings of the 2005 Particle Accelerator Conference, pp. 4197–4199 (2005). IEEE Young and Billen [2003] Young, L., Billen, J.: The particle tracking code parmela. In: Proceedings of the Particle Accelerator Conference, vol. 5, pp. 3521–3523 (2003) Pang and Rybarcyk [2014] Pang, X., Rybarcyk, L.: Gpu accelerated online multi-particle beam dynamics simulator for ion linear particle accelerators. Computer Physics Communications 185(3), 744–753 (2014) Adelmann [2019] Adelmann, A.: On nonintrusive uncertainty quantification and surrogate model construction in particle accelerator modeling. SIAM/ASA Journal on Uncertainty Quantification 7(2), 383–416 (2019) Newton [1970] Newton, R.: Inverse problems in physics. SIAM Review 12(3), 346–356 (1970) Wolski et al. [2022] Wolski, A., Johnson, M.A., King, M., Militsyn, B.L., Williams, P.H.: Transverse phase space tomography in an accelerator test facility using image compression and machine learning. Physical Review Accelerators and Beams 25(12), 122803 (2022) Mayet et al. [2022] Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022) Zhu et al. [2] Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. [2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. 
[2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. 
[2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Maulik, R., Lusch, B., Balaprakash, P.: Reduced-order modeling of advection-dominated systems with recurrent neural networks and convolutional autoencoders. Physics of Fluids 33(3) (2021) Vlachas et al. [2022] Vlachas, P.R., Arampatzis, G., Uhler, C., Koumoutsakos, P.: Multiscale simulations of complex systems by learning their effective dynamics. Nature Machine Intelligence 4(4), 359–366 (2022) Rautela et al. [2022] Rautela, M., Senthilnath, J., Monaco, E., Gopalakrishnan, S.: Delamination prediction in composite panels using unsupervised-feature learning methods with wavelet-enhanced guided wave representations. Composite Structures 291, 115579 (2022) Scheinker et al. 
Scheinker, A., Cropp, F., Filippetto, D.: Adaptive autoencoder latent space tuning for more robust machine learning beyond the training set for six-dimensional phase space diagnostics of a time-varying ultrafast electron-diffraction compact accelerator. Physical Review E 107(4), 045302 (2023)
Kingma, D.P., Welling, M.: Auto-encoding variational Bayes. arXiv preprint arXiv:1312.6114 (2013)
Hartmann, G., Goetzke, G., Düsterer, S., Feuer-Forson, P., Lever, F., Meier, D., Möller, F., Vera Ramirez, L., Guehr, M., Tiedtke, K., et al.: Unsupervised real-world knowledge extraction via disentangled variational autoencoders for photon diagnostics. Scientific Reports 12(1), 20783 (2022)
Scheinker, A., Filippetto, D., Cropp, F.: 6D phase space diagnostics based on adaptively tuned physics-informed generative convolutional neural networks. In: Journal of Physics: Conference Series, vol. 2420, p. 012068 (2023). IOP Publishing
Cathey, B., Cousineau, S., Aleksandrov, A., Zhukov, A.: First six dimensional phase space measurement of an accelerator beam. Physical Review Letters 121(6), 064804 (2018)
Tenenbaum, P.: Lucretia: A Matlab-based toolbox for the modelling and simulation of single-pass electron beam transport systems. In: Proceedings of the 2005 Particle Accelerator Conference, pp. 4197–4199 (2005). IEEE
Young, L., Billen, J.: The particle tracking code PARMELA. In: Proceedings of the Particle Accelerator Conference, vol. 5, pp. 3521–3523 (2003)
Pang, X., Rybarcyk, L.: GPU accelerated online multi-particle beam dynamics simulator for ion linear particle accelerators. Computer Physics Communications 185(3), 744–753 (2014)
Adelmann, A.: On nonintrusive uncertainty quantification and surrogate model construction in particle accelerator modeling. SIAM/ASA Journal on Uncertainty Quantification 7(2), 383–416 (2019)
Newton, R.: Inverse problems in physics. SIAM Review 12(3), 346–356 (1970)
Wolski, A., Johnson, M.A., King, M., Militsyn, B.L., Williams, P.H.: Transverse phase space tomography in an accelerator test facility using image compression and machine learning. Physical Review Accelerators and Beams 25(12), 122803 (2022)
Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022)
Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2021)
Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018)
Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep Lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023)
Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense X-ray free-electron-laser pulses using Bayesian optimization. Physical Review Research 5(2), 023114 (2023)
Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical Review Accelerators and Beams 23(7), 074601 (2020)
[2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. 
arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Cathey, B., Cousineau, S., Aleksandrov, A., Zhukov, A.: First six dimensional phase space measurement of an accelerator beam. Physical Review Letters 121(6), 064804 (2018) Tenenbaum [2005] Tenenbaum, P.: Lucretia: A matlab-based toolbox for the modelling and simulation of single-pass electron beam transport systems. In: Proceedings of the 2005 Particle Accelerator Conference, pp. 4197–4199 (2005). IEEE Young and Billen [2003] Young, L., Billen, J.: The particle tracking code parmela. In: Proceedings of the Particle Accelerator Conference, vol. 5, pp. 3521–3523 (2003) Pang and Rybarcyk [2014] Pang, X., Rybarcyk, L.: Gpu accelerated online multi-particle beam dynamics simulator for ion linear particle accelerators. Computer Physics Communications 185(3), 744–753 (2014) Adelmann [2019] Adelmann, A.: On nonintrusive uncertainty quantification and surrogate model construction in particle accelerator modeling. SIAM/ASA Journal on Uncertainty Quantification 7(2), 383–416 (2019) Newton [1970] Newton, R.: Inverse problems in physics. SIAM Review 12(3), 346–356 (1970) Wolski et al. [2022] Wolski, A., Johnson, M.A., King, M., Militsyn, B.L., Williams, P.H.: Transverse phase space tomography in an accelerator test facility using image compression and machine learning. Physical Review Accelerators and Beams 25(12), 122803 (2022) Mayet et al. [2022] Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022) Zhu et al. 
Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. 
Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022) Zhu et al. 
[2] Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. [2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. 
[2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. 
[2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. [2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. 
[2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. 
[2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. [2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. 
Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. 
IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. 
Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. 
IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. 
Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. 
[2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. 
[2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. 
[2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. 
[2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. 
[2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. 
[2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. 
[2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. 
arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. 
IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. 
In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. 
IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. 
[2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. 
[2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. 
IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. 
[2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. 
IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. 
In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. 
[2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. 
[2] Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. [2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. 
[2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. 
[2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Cathey, B., Cousineau, S., Aleksandrov, A., Zhukov, A.: First six dimensional phase space measurement of an accelerator beam. Physical Review Letters 121(6), 064804 (2018) Tenenbaum [2005] Tenenbaum, P.: Lucretia: A matlab-based toolbox for the modelling and simulation of single-pass electron beam transport systems. In: Proceedings of the 2005 Particle Accelerator Conference, pp. 4197–4199 (2005). IEEE Young and Billen [2003] Young, L., Billen, J.: The particle tracking code parmela. In: Proceedings of the Particle Accelerator Conference, vol. 5, pp. 3521–3523 (2003) Pang and Rybarcyk [2014] Pang, X., Rybarcyk, L.: Gpu accelerated online multi-particle beam dynamics simulator for ion linear particle accelerators. Computer Physics Communications 185(3), 744–753 (2014) Adelmann [2019] Adelmann, A.: On nonintrusive uncertainty quantification and surrogate model construction in particle accelerator modeling. SIAM/ASA Journal on Uncertainty Quantification 7(2), 383–416 (2019) Newton [1970] Newton, R.: Inverse problems in physics. SIAM Review 12(3), 346–356 (1970) Wolski et al. [2022] Wolski, A., Johnson, M.A., King, M., Militsyn, B.L., Williams, P.H.: Transverse phase space tomography in an accelerator test facility using image compression and machine learning. Physical Review Accelerators and Beams 25(12), 122803 (2022) Mayet et al. [2022] Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022) Zhu et al. [2] Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. [2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. 
Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. 
Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Tenenbaum, P.: Lucretia: A matlab-based toolbox for the modelling and simulation of single-pass electron beam transport systems. In: Proceedings of the 2005 Particle Accelerator Conference, pp. 4197–4199 (2005). IEEE Young and Billen [2003] Young, L., Billen, J.: The particle tracking code parmela. In: Proceedings of the Particle Accelerator Conference, vol. 5, pp. 3521–3523 (2003) Pang and Rybarcyk [2014] Pang, X., Rybarcyk, L.: Gpu accelerated online multi-particle beam dynamics simulator for ion linear particle accelerators. 
Computer Physics Communications 185(3), 744–753 (2014) Adelmann [2019] Adelmann, A.: On nonintrusive uncertainty quantification and surrogate model construction in particle accelerator modeling. SIAM/ASA Journal on Uncertainty Quantification 7(2), 383–416 (2019) Newton [1970] Newton, R.: Inverse problems in physics. SIAM Review 12(3), 346–356 (1970) Wolski et al. [2022] Wolski, A., Johnson, M.A., King, M., Militsyn, B.L., Williams, P.H.: Transverse phase space tomography in an accelerator test facility using image compression and machine learning. Physical Review Accelerators and Beams 25(12), 122803 (2022) Mayet et al. [2022] Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022) Zhu et al. [2] Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. [2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. 
Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. 
IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Young, L., Billen, J.: The particle tracking code parmela. In: Proceedings of the Particle Accelerator Conference, vol. 5, pp. 3521–3523 (2003) Pang and Rybarcyk [2014] Pang, X., Rybarcyk, L.: Gpu accelerated online multi-particle beam dynamics simulator for ion linear particle accelerators. Computer Physics Communications 185(3), 744–753 (2014) Adelmann [2019] Adelmann, A.: On nonintrusive uncertainty quantification and surrogate model construction in particle accelerator modeling. SIAM/ASA Journal on Uncertainty Quantification 7(2), 383–416 (2019) Newton [1970] Newton, R.: Inverse problems in physics. SIAM Review 12(3), 346–356 (1970) Wolski et al. [2022] Wolski, A., Johnson, M.A., King, M., Militsyn, B.L., Williams, P.H.: Transverse phase space tomography in an accelerator test facility using image compression and machine learning. Physical Review Accelerators and Beams 25(12), 122803 (2022) Mayet et al. [2022] Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022) Zhu et al. [2] Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. [2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. 
[2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. 
Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Pang, X., Rybarcyk, L.: Gpu accelerated online multi-particle beam dynamics simulator for ion linear particle accelerators. 
Computer Physics Communications 185(3), 744–753 (2014) Adelmann [2019] Adelmann, A.: On nonintrusive uncertainty quantification and surrogate model construction in particle accelerator modeling. SIAM/ASA Journal on Uncertainty Quantification 7(2), 383–416 (2019) Newton [1970] Newton, R.: Inverse problems in physics. SIAM Review 12(3), 346–356 (1970) Wolski et al. [2022] Wolski, A., Johnson, M.A., King, M., Militsyn, B.L., Williams, P.H.: Transverse phase space tomography in an accelerator test facility using image compression and machine learning. Physical Review Accelerators and Beams 25(12), 122803 (2022) Mayet et al. [2022] Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022) Zhu et al. [2] Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. [2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. 
Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. 
IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Adelmann, A.: On nonintrusive uncertainty quantification and surrogate model construction in particle accelerator modeling. SIAM/ASA Journal on Uncertainty Quantification 7(2), 383–416 (2019) Newton [1970] Newton, R.: Inverse problems in physics. SIAM Review 12(3), 346–356 (1970) Wolski et al. [2022] Wolski, A., Johnson, M.A., King, M., Militsyn, B.L., Williams, P.H.: Transverse phase space tomography in an accelerator test facility using image compression and machine learning. Physical Review Accelerators and Beams 25(12), 122803 (2022) Mayet et al. [2022] Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022) Zhu et al. [2] Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. [2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. 
[2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. 
arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. 
[2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. 
[2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. 
Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. 
[2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. 
[2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. 
Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. 
[2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. 
[2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. 
[2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. 
Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. 
John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. 
Computer Physics Communications 185(3), 744–753 (2014) Adelmann [2019] Adelmann, A.: On nonintrusive uncertainty quantification and surrogate model construction in particle accelerator modeling. SIAM/ASA Journal on Uncertainty Quantification 7(2), 383–416 (2019) Newton [1970] Newton, R.: Inverse problems in physics. SIAM Review 12(3), 346–356 (1970) Wolski et al. [2022] Wolski, A., Johnson, M.A., King, M., Militsyn, B.L., Williams, P.H.: Transverse phase space tomography in an accelerator test facility using image compression and machine learning. Physical Review Accelerators and Beams 25(12), 122803 (2022) Mayet et al. [2022] Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022) Zhu et al. [2] Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. [2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. 
Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. 
IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Rautela, M., Senthilnath, J., Monaco, E., Gopalakrishnan, S.: Delamination prediction in composite panels using unsupervised-feature learning methods with wavelet-enhanced guided wave representations. Composite Structures 291, 115579 (2022) Scheinker et al. [2023] Scheinker, A., Cropp, F., Filippetto, D.: Adaptive autoencoder latent space tuning for more robust machine learning beyond the training set for six-dimensional phase space diagnostics of a time-varying ultrafast electron-diffraction compact accelerator. Physical Review E 107(4), 045302 (2023) Kingma and Welling [2013] Kingma, D.P., Welling, M.: Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114 (2013) Hartmann et al. [2022] Hartmann, G., Goetzke, G., Düsterer, S., Feuer-Forson, P., Lever, F., Meier, D., Möller, F., Vera Ramirez, L., Guehr, M., Tiedtke, K., et al.: Unsupervised real-world knowledge extraction via disentangled variational autoencoders for photon diagnostics. Scientific Reports 12(1), 20783 (2022) Scheinker et al. [2023] Scheinker, A., Filippetto, D., Cropp, F.: 6d phase space diagnostics based on adaptively tuned physics-informed generative convolutional neural networks. In: Journal of Physics: Conference Series, vol. 2420, p. 012068 (2023). IOP Publishing Cathey et al. [2018] Cathey, B., Cousineau, S., Aleksandrov, A., Zhukov, A.: First six dimensional phase space measurement of an accelerator beam. Physical Review Letters 121(6), 064804 (2018) Tenenbaum [2005] Tenenbaum, P.: Lucretia: A matlab-based toolbox for the modelling and simulation of single-pass electron beam transport systems. In: Proceedings of the 2005 Particle Accelerator Conference, pp. 4197–4199 (2005). IEEE Young and Billen [2003] Young, L., Billen, J.: The particle tracking code parmela. In: Proceedings of the Particle Accelerator Conference, vol. 5, pp. 
3521–3523 (2003) Pang and Rybarcyk [2014] Pang, X., Rybarcyk, L.: Gpu accelerated online multi-particle beam dynamics simulator for ion linear particle accelerators. Computer Physics Communications 185(3), 744–753 (2014) Adelmann [2019] Adelmann, A.: On nonintrusive uncertainty quantification and surrogate model construction in particle accelerator modeling. SIAM/ASA Journal on Uncertainty Quantification 7(2), 383–416 (2019) Newton [1970] Newton, R.: Inverse problems in physics. SIAM Review 12(3), 346–356 (1970) Wolski et al. [2022] Wolski, A., Johnson, M.A., King, M., Militsyn, B.L., Williams, P.H.: Transverse phase space tomography in an accelerator test facility using image compression and machine learning. Physical Review Accelerators and Beams 25(12), 122803 (2022) Mayet et al. [2022] Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022) Zhu et al. [2] Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. [2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. 
[2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. 
arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Scheinker, A., Cropp, F., Filippetto, D.: Adaptive autoencoder latent space tuning for more robust machine learning beyond the training set for six-dimensional phase space diagnostics of a time-varying ultrafast electron-diffraction compact accelerator. Physical Review E 107(4), 045302 (2023) Kingma and Welling [2013] Kingma, D.P., Welling, M.: Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114 (2013) Hartmann et al. [2022] Hartmann, G., Goetzke, G., Düsterer, S., Feuer-Forson, P., Lever, F., Meier, D., Möller, F., Vera Ramirez, L., Guehr, M., Tiedtke, K., et al.: Unsupervised real-world knowledge extraction via disentangled variational autoencoders for photon diagnostics. Scientific Reports 12(1), 20783 (2022) Scheinker et al. [2023] Scheinker, A., Filippetto, D., Cropp, F.: 6d phase space diagnostics based on adaptively tuned physics-informed generative convolutional neural networks. In: Journal of Physics: Conference Series, vol. 2420, p. 012068 (2023). IOP Publishing Cathey et al. [2018] Cathey, B., Cousineau, S., Aleksandrov, A., Zhukov, A.: First six dimensional phase space measurement of an accelerator beam. Physical Review Letters 121(6), 064804 (2018) Tenenbaum [2005] Tenenbaum, P.: Lucretia: A matlab-based toolbox for the modelling and simulation of single-pass electron beam transport systems. In: Proceedings of the 2005 Particle Accelerator Conference, pp. 4197–4199 (2005). IEEE Young and Billen [2003] Young, L., Billen, J.: The particle tracking code parmela. In: Proceedings of the Particle Accelerator Conference, vol. 5, pp. 3521–3523 (2003) Pang and Rybarcyk [2014] Pang, X., Rybarcyk, L.: Gpu accelerated online multi-particle beam dynamics simulator for ion linear particle accelerators. 
Computer Physics Communications 185(3), 744–753 (2014) Adelmann [2019] Adelmann, A.: On nonintrusive uncertainty quantification and surrogate model construction in particle accelerator modeling. SIAM/ASA Journal on Uncertainty Quantification 7(2), 383–416 (2019) Newton [1970] Newton, R.: Inverse problems in physics. SIAM Review 12(3), 346–356 (1970) Wolski et al. [2022] Wolski, A., Johnson, M.A., King, M., Militsyn, B.L., Williams, P.H.: Transverse phase space tomography in an accelerator test facility using image compression and machine learning. Physical Review Accelerators and Beams 25(12), 122803 (2022) Mayet et al. [2022] Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022) Zhu et al. [2] Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. [2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. 
Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. 
IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Kingma, D.P., Welling, M.: Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114 (2013) Hartmann et al. [2022] Hartmann, G., Goetzke, G., Düsterer, S., Feuer-Forson, P., Lever, F., Meier, D., Möller, F., Vera Ramirez, L., Guehr, M., Tiedtke, K., et al.: Unsupervised real-world knowledge extraction via disentangled variational autoencoders for photon diagnostics. Scientific Reports 12(1), 20783 (2022) Scheinker et al. [2023] Scheinker, A., Filippetto, D., Cropp, F.: 6d phase space diagnostics based on adaptively tuned physics-informed generative convolutional neural networks. In: Journal of Physics: Conference Series, vol. 2420, p. 012068 (2023). IOP Publishing Cathey et al. [2018] Cathey, B., Cousineau, S., Aleksandrov, A., Zhukov, A.: First six dimensional phase space measurement of an accelerator beam. Physical Review Letters 121(6), 064804 (2018) Tenenbaum [2005] Tenenbaum, P.: Lucretia: A matlab-based toolbox for the modelling and simulation of single-pass electron beam transport systems. In: Proceedings of the 2005 Particle Accelerator Conference, pp. 4197–4199 (2005). IEEE Young and Billen [2003] Young, L., Billen, J.: The particle tracking code parmela. In: Proceedings of the Particle Accelerator Conference, vol. 5, pp. 3521–3523 (2003) Pang and Rybarcyk [2014] Pang, X., Rybarcyk, L.: Gpu accelerated online multi-particle beam dynamics simulator for ion linear particle accelerators. Computer Physics Communications 185(3), 744–753 (2014) Adelmann [2019] Adelmann, A.: On nonintrusive uncertainty quantification and surrogate model construction in particle accelerator modeling. SIAM/ASA Journal on Uncertainty Quantification 7(2), 383–416 (2019) Newton [1970] Newton, R.: Inverse problems in physics. SIAM Review 12(3), 346–356 (1970) Wolski et al. 
[2022] Wolski, A., Johnson, M.A., King, M., Militsyn, B.L., Williams, P.H.: Transverse phase space tomography in an accelerator test facility using image compression and machine learning. Physical Review Accelerators and Beams 25(12), 122803 (2022) Mayet et al. [2022] Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022) Zhu et al. [2] Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. [2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. 
In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. 
[2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Hartmann, G., Goetzke, G., Düsterer, S., Feuer-Forson, P., Lever, F., Meier, D., Möller, F., Vera Ramirez, L., Guehr, M., Tiedtke, K., et al.: Unsupervised real-world knowledge extraction via disentangled variational autoencoders for photon diagnostics. Scientific Reports 12(1), 20783 (2022) Scheinker et al. [2023] Scheinker, A., Filippetto, D., Cropp, F.: 6d phase space diagnostics based on adaptively tuned physics-informed generative convolutional neural networks. In: Journal of Physics: Conference Series, vol. 2420, p. 012068 (2023). IOP Publishing Cathey et al. [2018] Cathey, B., Cousineau, S., Aleksandrov, A., Zhukov, A.: First six dimensional phase space measurement of an accelerator beam. Physical Review Letters 121(6), 064804 (2018) Tenenbaum [2005] Tenenbaum, P.: Lucretia: A matlab-based toolbox for the modelling and simulation of single-pass electron beam transport systems. In: Proceedings of the 2005 Particle Accelerator Conference, pp. 4197–4199 (2005). IEEE Young and Billen [2003] Young, L., Billen, J.: The particle tracking code parmela. In: Proceedings of the Particle Accelerator Conference, vol. 5, pp. 3521–3523 (2003) Pang and Rybarcyk [2014] Pang, X., Rybarcyk, L.: Gpu accelerated online multi-particle beam dynamics simulator for ion linear particle accelerators. Computer Physics Communications 185(3), 744–753 (2014) Adelmann [2019] Adelmann, A.: On nonintrusive uncertainty quantification and surrogate model construction in particle accelerator modeling. SIAM/ASA Journal on Uncertainty Quantification 7(2), 383–416 (2019) Newton [1970] Newton, R.: Inverse problems in physics. SIAM Review 12(3), 346–356 (1970) Wolski et al. [2022] Wolski, A., Johnson, M.A., King, M., Militsyn, B.L., Williams, P.H.: Transverse phase space tomography in an accelerator test facility using image compression and machine learning. Physical Review Accelerators and Beams 25(12), 122803 (2022) Mayet et al. [2022] Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022) Zhu et al. [2] Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. 
Physical Review Applied 16(2), 024005 (2) Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. [2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. 
[2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022) Zhu et al. [2] Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. [2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. 
Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. 
IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. [2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. 
[2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. 
[2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. [2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. 
[2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. 
Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. 
[2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. 
[2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. 
Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. 
Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. 
[2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. 
[2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. 
[2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. 
arXiv preprint arXiv:1807.07543 (2018) Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. 
Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. 
[2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. 
[2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. 
[2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. 
arXiv preprint arXiv:1807.07543 (2018) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. 
In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018)
Physical Review Accelerators and Beams 25(9), 094601 (2022) Zhu et al. [2] Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. [2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. 
[2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. 
[2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Kingma, D.P., Welling, M.: Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114 (2013) Hartmann et al. [2022] Hartmann, G., Goetzke, G., Düsterer, S., Feuer-Forson, P., Lever, F., Meier, D., Möller, F., Vera Ramirez, L., Guehr, M., Tiedtke, K., et al.: Unsupervised real-world knowledge extraction via disentangled variational autoencoders for photon diagnostics. Scientific Reports 12(1), 20783 (2022) Scheinker et al. [2023] Scheinker, A., Filippetto, D., Cropp, F.: 6d phase space diagnostics based on adaptively tuned physics-informed generative convolutional neural networks. In: Journal of Physics: Conference Series, vol. 2420, p. 012068 (2023). IOP Publishing Cathey et al. [2018] Cathey, B., Cousineau, S., Aleksandrov, A., Zhukov, A.: First six dimensional phase space measurement of an accelerator beam. Physical Review Letters 121(6), 064804 (2018) Tenenbaum [2005] Tenenbaum, P.: Lucretia: A matlab-based toolbox for the modelling and simulation of single-pass electron beam transport systems. In: Proceedings of the 2005 Particle Accelerator Conference, pp. 4197–4199 (2005). IEEE Young and Billen [2003] Young, L., Billen, J.: The particle tracking code parmela. In: Proceedings of the Particle Accelerator Conference, vol. 5, pp. 3521–3523 (2003) Pang and Rybarcyk [2014] Pang, X., Rybarcyk, L.: Gpu accelerated online multi-particle beam dynamics simulator for ion linear particle accelerators. Computer Physics Communications 185(3), 744–753 (2014) Adelmann [2019] Adelmann, A.: On nonintrusive uncertainty quantification and surrogate model construction in particle accelerator modeling. SIAM/ASA Journal on Uncertainty Quantification 7(2), 383–416 (2019) Newton [1970] Newton, R.: Inverse problems in physics. SIAM Review 12(3), 346–356 (1970) Wolski et al. [2022] Wolski, A., Johnson, M.A., King, M., Militsyn, B.L., Williams, P.H.: Transverse phase space tomography in an accelerator test facility using image compression and machine learning. Physical Review Accelerators and Beams 25(12), 122803 (2022) Mayet et al. [2022] Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022) Zhu et al. [2] Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. 
Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. [2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. 
[2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. 
[2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Hartmann, G., Goetzke, G., Düsterer, S., Feuer-Forson, P., Lever, F., Meier, D., Möller, F., Vera Ramirez, L., Guehr, M., Tiedtke, K., et al.: Unsupervised real-world knowledge extraction via disentangled variational autoencoders for photon diagnostics. Scientific Reports 12(1), 20783 (2022) Scheinker et al. [2023] Scheinker, A., Filippetto, D., Cropp, F.: 6d phase space diagnostics based on adaptively tuned physics-informed generative convolutional neural networks. In: Journal of Physics: Conference Series, vol. 2420, p. 012068 (2023). IOP Publishing Cathey et al. [2018] Cathey, B., Cousineau, S., Aleksandrov, A., Zhukov, A.: First six dimensional phase space measurement of an accelerator beam. Physical Review Letters 121(6), 064804 (2018) Tenenbaum [2005] Tenenbaum, P.: Lucretia: A matlab-based toolbox for the modelling and simulation of single-pass electron beam transport systems. In: Proceedings of the 2005 Particle Accelerator Conference, pp. 4197–4199 (2005). IEEE Young and Billen [2003] Young, L., Billen, J.: The particle tracking code parmela. In: Proceedings of the Particle Accelerator Conference, vol. 5, pp. 3521–3523 (2003) Pang and Rybarcyk [2014] Pang, X., Rybarcyk, L.: Gpu accelerated online multi-particle beam dynamics simulator for ion linear particle accelerators. Computer Physics Communications 185(3), 744–753 (2014) Adelmann [2019] Adelmann, A.: On nonintrusive uncertainty quantification and surrogate model construction in particle accelerator modeling. SIAM/ASA Journal on Uncertainty Quantification 7(2), 383–416 (2019) Newton [1970] Newton, R.: Inverse problems in physics. SIAM Review 12(3), 346–356 (1970) Wolski et al. [2022] Wolski, A., Johnson, M.A., King, M., Militsyn, B.L., Williams, P.H.: Transverse phase space tomography in an accelerator test facility using image compression and machine learning. Physical Review Accelerators and Beams 25(12), 122803 (2022) Mayet et al. [2022] Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022) Zhu et al. [2] Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. [2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. 
Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. 
Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Scheinker, A., Filippetto, D., Cropp, F.: 6d phase space diagnostics based on adaptively tuned physics-informed generative convolutional neural networks. In: Journal of Physics: Conference Series, vol. 2420, p. 012068 (2023). IOP Publishing Cathey et al. [2018] Cathey, B., Cousineau, S., Aleksandrov, A., Zhukov, A.: First six dimensional phase space measurement of an accelerator beam. 
Physical Review Letters 121(6), 064804 (2018) Tenenbaum [2005] Tenenbaum, P.: Lucretia: A matlab-based toolbox for the modelling and simulation of single-pass electron beam transport systems. In: Proceedings of the 2005 Particle Accelerator Conference, pp. 4197–4199 (2005). IEEE Young and Billen [2003] Young, L., Billen, J.: The particle tracking code parmela. In: Proceedings of the Particle Accelerator Conference, vol. 5, pp. 3521–3523 (2003) Pang and Rybarcyk [2014] Pang, X., Rybarcyk, L.: Gpu accelerated online multi-particle beam dynamics simulator for ion linear particle accelerators. Computer Physics Communications 185(3), 744–753 (2014) Adelmann [2019] Adelmann, A.: On nonintrusive uncertainty quantification and surrogate model construction in particle accelerator modeling. SIAM/ASA Journal on Uncertainty Quantification 7(2), 383–416 (2019) Newton [1970] Newton, R.: Inverse problems in physics. SIAM Review 12(3), 346–356 (1970) Wolski et al. [2022] Wolski, A., Johnson, M.A., King, M., Militsyn, B.L., Williams, P.H.: Transverse phase space tomography in an accelerator test facility using image compression and machine learning. Physical Review Accelerators and Beams 25(12), 122803 (2022) Mayet et al. [2022] Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022) Zhu et al. [2] Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. [2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. 
Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. 
IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Cathey, B., Cousineau, S., Aleksandrov, A., Zhukov, A.: First six dimensional phase space measurement of an accelerator beam. Physical Review Letters 121(6), 064804 (2018) Tenenbaum [2005] Tenenbaum, P.: Lucretia: A matlab-based toolbox for the modelling and simulation of single-pass electron beam transport systems. In: Proceedings of the 2005 Particle Accelerator Conference, pp. 4197–4199 (2005). IEEE Young and Billen [2003] Young, L., Billen, J.: The particle tracking code parmela. In: Proceedings of the Particle Accelerator Conference, vol. 5, pp. 3521–3523 (2003) Pang and Rybarcyk [2014] Pang, X., Rybarcyk, L.: Gpu accelerated online multi-particle beam dynamics simulator for ion linear particle accelerators. Computer Physics Communications 185(3), 744–753 (2014) Adelmann [2019] Adelmann, A.: On nonintrusive uncertainty quantification and surrogate model construction in particle accelerator modeling. SIAM/ASA Journal on Uncertainty Quantification 7(2), 383–416 (2019) Newton [1970] Newton, R.: Inverse problems in physics. SIAM Review 12(3), 346–356 (1970) Wolski et al. [2022] Wolski, A., Johnson, M.A., King, M., Militsyn, B.L., Williams, P.H.: Transverse phase space tomography in an accelerator test facility using image compression and machine learning. Physical Review Accelerators and Beams 25(12), 122803 (2022) Mayet et al. 
[2022] Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022) Zhu et al. [2] Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. [2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. 
The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. 
[2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Tenenbaum, P.: Lucretia: A matlab-based toolbox for the modelling and simulation of single-pass electron beam transport systems. In: Proceedings of the 2005 Particle Accelerator Conference, pp. 4197–4199 (2005). IEEE Young and Billen [2003] Young, L., Billen, J.: The particle tracking code parmela. In: Proceedings of the Particle Accelerator Conference, vol. 5, pp. 3521–3523 (2003) Pang and Rybarcyk [2014] Pang, X., Rybarcyk, L.: Gpu accelerated online multi-particle beam dynamics simulator for ion linear particle accelerators. Computer Physics Communications 185(3), 744–753 (2014) Adelmann [2019] Adelmann, A.: On nonintrusive uncertainty quantification and surrogate model construction in particle accelerator modeling. SIAM/ASA Journal on Uncertainty Quantification 7(2), 383–416 (2019) Newton [1970] Newton, R.: Inverse problems in physics. SIAM Review 12(3), 346–356 (1970) Wolski et al. [2022] Wolski, A., Johnson, M.A., King, M., Militsyn, B.L., Williams, P.H.: Transverse phase space tomography in an accelerator test facility using image compression and machine learning. Physical Review Accelerators and Beams 25(12), 122803 (2022) Mayet et al. [2022] Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022) Zhu et al. [2] Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. [2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. 
Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. 
Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Young, L., Billen, J.: The particle tracking code parmela. In: Proceedings of the Particle Accelerator Conference, vol. 5, pp. 3521–3523 (2003) Pang and Rybarcyk [2014] Pang, X., Rybarcyk, L.: Gpu accelerated online multi-particle beam dynamics simulator for ion linear particle accelerators. Computer Physics Communications 185(3), 744–753 (2014) Adelmann [2019] Adelmann, A.: On nonintrusive uncertainty quantification and surrogate model construction in particle accelerator modeling. SIAM/ASA Journal on Uncertainty Quantification 7(2), 383–416 (2019) Newton [1970] Newton, R.: Inverse problems in physics. 
SIAM Review 12(3), 346–356 (1970) Wolski et al. [2022] Wolski, A., Johnson, M.A., King, M., Militsyn, B.L., Williams, P.H.: Transverse phase space tomography in an accelerator test facility using image compression and machine learning. Physical Review Accelerators and Beams 25(12), 122803 (2022) Mayet et al. [2022] Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022) Zhu et al. [2] Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. [2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. 
[2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. 
[2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Pang, X., Rybarcyk, L.: Gpu accelerated online multi-particle beam dynamics simulator for ion linear particle accelerators. Computer Physics Communications 185(3), 744–753 (2014) Adelmann [2019] Adelmann, A.: On nonintrusive uncertainty quantification and surrogate model construction in particle accelerator modeling. SIAM/ASA Journal on Uncertainty Quantification 7(2), 383–416 (2019) Newton [1970] Newton, R.: Inverse problems in physics. SIAM Review 12(3), 346–356 (1970) Wolski et al. [2022] Wolski, A., Johnson, M.A., King, M., Militsyn, B.L., Williams, P.H.: Transverse phase space tomography in an accelerator test facility using image compression and machine learning. Physical Review Accelerators and Beams 25(12), 122803 (2022) Mayet et al. [2022] Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022) Zhu et al. [2] Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. [2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. 
Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. 
Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Adelmann, A.: On nonintrusive uncertainty quantification and surrogate model construction in particle accelerator modeling. SIAM/ASA Journal on Uncertainty Quantification 7(2), 383–416 (2019) Newton [1970] Newton, R.: Inverse problems in physics. SIAM Review 12(3), 346–356 (1970) Wolski et al. [2022] Wolski, A., Johnson, M.A., King, M., Militsyn, B.L., Williams, P.H.: Transverse phase space tomography in an accelerator test facility using image compression and machine learning. Physical Review Accelerators and Beams 25(12), 122803 (2022) Mayet et al. 
[2022] Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022) Zhu et al. [2] Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. [2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. 
The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. 
[2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Newton, R.: Inverse problems in physics. SIAM Review 12(3), 346–356 (1970) Wolski et al. [2022] Wolski, A., Johnson, M.A., King, M., Militsyn, B.L., Williams, P.H.: Transverse phase space tomography in an accelerator test facility using image compression and machine learning. Physical Review Accelerators and Beams 25(12), 122803 (2022) Mayet et al. [2022] Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022) Zhu et al. [2] Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. [2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. 
[2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. 
[2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Wolski, A., Johnson, M.A., King, M., Militsyn, B.L., Williams, P.H.: Transverse phase space tomography in an accelerator test facility using image compression and machine learning. Physical Review Accelerators and Beams 25(12), 122803 (2022) Mayet et al. [2022] Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022) Zhu et al. [2] Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. [2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. 
[2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. 
Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. 
arXiv preprint arXiv:1807.07543 (2018) Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022) Zhu et al. [2] Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. [2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. 
The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. 
[2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023)
Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of ResNet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE
He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016)
Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018)
Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2021)
Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018)
Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep Lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023)
Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense X-ray free-electron-laser pulses using Bayesian optimization. Physical Review Research 5(2), 023114 (2023)
Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical Review Accelerators and Beams 23(7), 074601 (2020)
Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022)
Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient RF cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022)
Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at Jefferson Laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020)
Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018)
Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023)
Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016)
Nelson, B.K.: Time series analysis using autoregressive integrated moving average (ARIMA) models. Academic Emergency Medicine 5(7), 739–744 (1998)
Germain, M., Gregor, K., Murray, I., Larochelle, H.: MADE: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR
Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016)
Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature Computational Science 2(11), 745–757 (2022)
Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023)
Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE Transactions on Neural Networks and Learning Systems 32(1), 4–24 (2020)
Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons (2008)
Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021)
Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in Neural Information Processing Systems 28 (2015)
Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021)
Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in VAE latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021)
Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of Cheminformatics 10(1), 1–9 (2018)
Karevan, Z., Suykens, J.A.: Transductive LSTM for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020)
Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014)
Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE Transactions on Image Processing 13(4), 600–612 (2004)
Van der Maaten, L., Hinton, G.: Visualizing data using t-SNE. Journal of Machine Learning Research 9(11) (2008)
McInnes, L., Healy, J., Melville, J.: UMAP: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018)
Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022)
Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023)
Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: GANs trained by a two time-scale update rule converge to a local Nash equilibrium. Advances in Neural Information Processing Systems 30 (2017)
Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in Neural Information Processing Systems 31 (2018)
IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. 
[2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. 
IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. 
In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. 
[2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. 
[2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. 
IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. 
arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. 
[2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. 
[2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. 
[2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. 
arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. 
[2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. 
[2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. 
[2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. 
[2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. 
arXiv preprint arXiv:1807.07543 (2018) Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). 
IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018)
  21. Vlachas, P.R., Arampatzis, G., Uhler, C., Koumoutsakos, P.: Multiscale simulations of complex systems by learning their effective dynamics. Nature Machine Intelligence 4(4), 359–366 (2022) Rautela et al. [2022] Rautela, M., Senthilnath, J., Monaco, E., Gopalakrishnan, S.: Delamination prediction in composite panels using unsupervised-feature learning methods with wavelet-enhanced guided wave representations. Composite Structures 291, 115579 (2022) Scheinker et al. [2023] Scheinker, A., Cropp, F., Filippetto, D.: Adaptive autoencoder latent space tuning for more robust machine learning beyond the training set for six-dimensional phase space diagnostics of a time-varying ultrafast electron-diffraction compact accelerator. Physical Review E 107(4), 045302 (2023) Kingma and Welling [2013] Kingma, D.P., Welling, M.: Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114 (2013) Hartmann et al. [2022] Hartmann, G., Goetzke, G., Düsterer, S., Feuer-Forson, P., Lever, F., Meier, D., Möller, F., Vera Ramirez, L., Guehr, M., Tiedtke, K., et al.: Unsupervised real-world knowledge extraction via disentangled variational autoencoders for photon diagnostics. Scientific Reports 12(1), 20783 (2022) Scheinker et al. [2023] Scheinker, A., Filippetto, D., Cropp, F.: 6d phase space diagnostics based on adaptively tuned physics-informed generative convolutional neural networks. In: Journal of Physics: Conference Series, vol. 2420, p. 012068 (2023). IOP Publishing Cathey et al. [2018] Cathey, B., Cousineau, S., Aleksandrov, A., Zhukov, A.: First six dimensional phase space measurement of an accelerator beam. Physical Review Letters 121(6), 064804 (2018) Tenenbaum [2005] Tenenbaum, P.: Lucretia: A matlab-based toolbox for the modelling and simulation of single-pass electron beam transport systems. In: Proceedings of the 2005 Particle Accelerator Conference, pp. 4197–4199 (2005). IEEE Young and Billen [2003] Young, L., Billen, J.: The particle tracking code parmela. In: Proceedings of the Particle Accelerator Conference, vol. 5, pp. 3521–3523 (2003) Pang and Rybarcyk [2014] Pang, X., Rybarcyk, L.: Gpu accelerated online multi-particle beam dynamics simulator for ion linear particle accelerators. Computer Physics Communications 185(3), 744–753 (2014) Adelmann [2019] Adelmann, A.: On nonintrusive uncertainty quantification and surrogate model construction in particle accelerator modeling. SIAM/ASA Journal on Uncertainty Quantification 7(2), 383–416 (2019) Newton [1970] Newton, R.: Inverse problems in physics. SIAM Review 12(3), 346–356 (1970) Wolski et al. [2022] Wolski, A., Johnson, M.A., King, M., Militsyn, B.L., Williams, P.H.: Transverse phase space tomography in an accelerator test facility using image compression and machine learning. Physical Review Accelerators and Beams 25(12), 122803 (2022) Mayet et al. [2022] Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022) Zhu et al. [2] Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. 
[2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. [2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. 
IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. 
In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Rautela, M., Senthilnath, J., Monaco, E., Gopalakrishnan, S.: Delamination prediction in composite panels using unsupervised-feature learning methods with wavelet-enhanced guided wave representations. Composite Structures 291, 115579 (2022) Scheinker et al. [2023] Scheinker, A., Cropp, F., Filippetto, D.: Adaptive autoencoder latent space tuning for more robust machine learning beyond the training set for six-dimensional phase space diagnostics of a time-varying ultrafast electron-diffraction compact accelerator. Physical Review E 107(4), 045302 (2023) Kingma and Welling [2013] Kingma, D.P., Welling, M.: Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114 (2013) Hartmann et al. [2022] Hartmann, G., Goetzke, G., Düsterer, S., Feuer-Forson, P., Lever, F., Meier, D., Möller, F., Vera Ramirez, L., Guehr, M., Tiedtke, K., et al.: Unsupervised real-world knowledge extraction via disentangled variational autoencoders for photon diagnostics. Scientific Reports 12(1), 20783 (2022) Scheinker et al. [2023] Scheinker, A., Filippetto, D., Cropp, F.: 6d phase space diagnostics based on adaptively tuned physics-informed generative convolutional neural networks. In: Journal of Physics: Conference Series, vol. 2420, p. 012068 (2023). IOP Publishing Cathey et al. [2018] Cathey, B., Cousineau, S., Aleksandrov, A., Zhukov, A.: First six dimensional phase space measurement of an accelerator beam. Physical Review Letters 121(6), 064804 (2018) Tenenbaum [2005] Tenenbaum, P.: Lucretia: A matlab-based toolbox for the modelling and simulation of single-pass electron beam transport systems. In: Proceedings of the 2005 Particle Accelerator Conference, pp. 4197–4199 (2005). IEEE Young and Billen [2003] Young, L., Billen, J.: The particle tracking code parmela. In: Proceedings of the Particle Accelerator Conference, vol. 5, pp. 3521–3523 (2003) Pang and Rybarcyk [2014] Pang, X., Rybarcyk, L.: Gpu accelerated online multi-particle beam dynamics simulator for ion linear particle accelerators. Computer Physics Communications 185(3), 744–753 (2014) Adelmann [2019] Adelmann, A.: On nonintrusive uncertainty quantification and surrogate model construction in particle accelerator modeling. SIAM/ASA Journal on Uncertainty Quantification 7(2), 383–416 (2019) Newton [1970] Newton, R.: Inverse problems in physics. SIAM Review 12(3), 346–356 (1970) Wolski et al. [2022] Wolski, A., Johnson, M.A., King, M., Militsyn, B.L., Williams, P.H.: Transverse phase space tomography in an accelerator test facility using image compression and machine learning. Physical Review Accelerators and Beams 25(12), 122803 (2022) Mayet et al. [2022] Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022) Zhu et al. [2] Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. 
[2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. [2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. 
IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. 
In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Scheinker, A., Cropp, F., Filippetto, D.: Adaptive autoencoder latent space tuning for more robust machine learning beyond the training set for six-dimensional phase space diagnostics of a time-varying ultrafast electron-diffraction compact accelerator. Physical Review E 107(4), 045302 (2023) Kingma and Welling [2013] Kingma, D.P., Welling, M.: Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114 (2013) Hartmann et al. [2022] Hartmann, G., Goetzke, G., Düsterer, S., Feuer-Forson, P., Lever, F., Meier, D., Möller, F., Vera Ramirez, L., Guehr, M., Tiedtke, K., et al.: Unsupervised real-world knowledge extraction via disentangled variational autoencoders for photon diagnostics. Scientific Reports 12(1), 20783 (2022) Scheinker et al. [2023] Scheinker, A., Filippetto, D., Cropp, F.: 6d phase space diagnostics based on adaptively tuned physics-informed generative convolutional neural networks. In: Journal of Physics: Conference Series, vol. 2420, p. 012068 (2023). IOP Publishing Cathey et al. [2018] Cathey, B., Cousineau, S., Aleksandrov, A., Zhukov, A.: First six dimensional phase space measurement of an accelerator beam. Physical Review Letters 121(6), 064804 (2018) Tenenbaum [2005] Tenenbaum, P.: Lucretia: A matlab-based toolbox for the modelling and simulation of single-pass electron beam transport systems. In: Proceedings of the 2005 Particle Accelerator Conference, pp. 4197–4199 (2005). IEEE Young and Billen [2003] Young, L., Billen, J.: The particle tracking code parmela. In: Proceedings of the Particle Accelerator Conference, vol. 5, pp. 3521–3523 (2003) Pang and Rybarcyk [2014] Pang, X., Rybarcyk, L.: Gpu accelerated online multi-particle beam dynamics simulator for ion linear particle accelerators. Computer Physics Communications 185(3), 744–753 (2014) Adelmann [2019] Adelmann, A.: On nonintrusive uncertainty quantification and surrogate model construction in particle accelerator modeling. SIAM/ASA Journal on Uncertainty Quantification 7(2), 383–416 (2019) Newton [1970] Newton, R.: Inverse problems in physics. SIAM Review 12(3), 346–356 (1970) Wolski et al. [2022] Wolski, A., Johnson, M.A., King, M., Militsyn, B.L., Williams, P.H.: Transverse phase space tomography in an accelerator test facility using image compression and machine learning. Physical Review Accelerators and Beams 25(12), 122803 (2022) Mayet et al. [2022] Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022) Zhu et al. [2] Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. 
[2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. 
[2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. 
[2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Kingma, D.P., Welling, M.: Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114 (2013) Hartmann et al. [2022] Hartmann, G., Goetzke, G., Düsterer, S., Feuer-Forson, P., Lever, F., Meier, D., Möller, F., Vera Ramirez, L., Guehr, M., Tiedtke, K., et al.: Unsupervised real-world knowledge extraction via disentangled variational autoencoders for photon diagnostics. Scientific Reports 12(1), 20783 (2022) Scheinker et al. [2023] Scheinker, A., Filippetto, D., Cropp, F.: 6d phase space diagnostics based on adaptively tuned physics-informed generative convolutional neural networks. In: Journal of Physics: Conference Series, vol. 2420, p. 012068 (2023). IOP Publishing Cathey et al. [2018] Cathey, B., Cousineau, S., Aleksandrov, A., Zhukov, A.: First six dimensional phase space measurement of an accelerator beam. Physical Review Letters 121(6), 064804 (2018) Tenenbaum [2005] Tenenbaum, P.: Lucretia: A matlab-based toolbox for the modelling and simulation of single-pass electron beam transport systems. In: Proceedings of the 2005 Particle Accelerator Conference, pp. 4197–4199 (2005). IEEE Young and Billen [2003] Young, L., Billen, J.: The particle tracking code parmela. In: Proceedings of the Particle Accelerator Conference, vol. 5, pp. 3521–3523 (2003) Pang and Rybarcyk [2014] Pang, X., Rybarcyk, L.: Gpu accelerated online multi-particle beam dynamics simulator for ion linear particle accelerators. Computer Physics Communications 185(3), 744–753 (2014) Adelmann [2019] Adelmann, A.: On nonintrusive uncertainty quantification and surrogate model construction in particle accelerator modeling. SIAM/ASA Journal on Uncertainty Quantification 7(2), 383–416 (2019) Newton [1970] Newton, R.: Inverse problems in physics. SIAM Review 12(3), 346–356 (1970) Wolski et al. [2022] Wolski, A., Johnson, M.A., King, M., Militsyn, B.L., Williams, P.H.: Transverse phase space tomography in an accelerator test facility using image compression and machine learning. Physical Review Accelerators and Beams 25(12), 122803 (2022) Mayet et al. [2022] Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022) Zhu et al. [2] Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. [2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. 
Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. 
Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Hartmann, G., Goetzke, G., Düsterer, S., Feuer-Forson, P., Lever, F., Meier, D., Möller, F., Vera Ramirez, L., Guehr, M., Tiedtke, K., et al.: Unsupervised real-world knowledge extraction via disentangled variational autoencoders for photon diagnostics. Scientific Reports 12(1), 20783 (2022) Scheinker et al. 
Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. 
Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022) Zhu et al. 
[2] Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. [2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. 
[2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. 
[2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. [2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. 
[2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. 
[2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. [2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. 
Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. 
IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. 
Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. 
IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. 
Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. 
[2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. 
[2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. 
[2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. 
arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. 
[2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. 
[2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. 
[2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. 
[2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. 
arXiv preprint arXiv:1807.07543 (2018) Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). 
IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018)
22. Rautela, M., Senthilnath, J., Monaco, E., Gopalakrishnan, S.: Delamination prediction in composite panels using unsupervised-feature learning methods with wavelet-enhanced guided wave representations. Composite Structures 291, 115579 (2022)
23. Scheinker, A., Cropp, F., Filippetto, D.: Adaptive autoencoder latent space tuning for more robust machine learning beyond the training set for six-dimensional phase space diagnostics of a time-varying ultrafast electron-diffraction compact accelerator. Physical Review E 107(4), 045302 (2023)
24. Kingma, D.P., Welling, M.: Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114 (2013)
25. Hartmann, G., Goetzke, G., Düsterer, S., Feuer-Forson, P., Lever, F., Meier, D., Möller, F., Vera Ramirez, L., Guehr, M., Tiedtke, K., et al.: Unsupervised real-world knowledge extraction via disentangled variational autoencoders for photon diagnostics. Scientific Reports 12(1), 20783 (2022)
26. Scheinker, A., Filippetto, D., Cropp, F.: 6d phase space diagnostics based on adaptively tuned physics-informed generative convolutional neural networks. In: Journal of Physics: Conference Series, vol. 2420, p. 012068 (2023). IOP Publishing
27. Cathey, B., Cousineau, S., Aleksandrov, A., Zhukov, A.: First six dimensional phase space measurement of an accelerator beam. Physical Review Letters 121(6), 064804 (2018)
28. Tenenbaum, P.: Lucretia: A matlab-based toolbox for the modelling and simulation of single-pass electron beam transport systems. In: Proceedings of the 2005 Particle Accelerator Conference, pp. 4197–4199 (2005). IEEE
29. Young, L., Billen, J.: The particle tracking code parmela. In: Proceedings of the Particle Accelerator Conference, vol. 5, pp. 3521–3523 (2003)
30. Pang, X., Rybarcyk, L.: Gpu accelerated online multi-particle beam dynamics simulator for ion linear particle accelerators. Computer Physics Communications 185(3), 744–753 (2014)
31. Adelmann, A.: On nonintrusive uncertainty quantification and surrogate model construction in particle accelerator modeling. SIAM/ASA Journal on Uncertainty Quantification 7(2), 383–416 (2019)
32. Newton, R.: Inverse problems in physics. SIAM Review 12(3), 346–356 (1970)
33. Wolski, A., Johnson, M.A., King, M., Militsyn, B.L., Williams, P.H.: Transverse phase space tomography in an accelerator test facility using image compression and machine learning. Physical Review Accelerators and Beams 25(12), 122803 (2022)
34. Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022)
35. Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2021)
36. Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018)
37. Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023)
38. Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023)
39. Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical Review Accelerators and Beams 23(7), 074601 (2020)
40. Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022)
41. Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022)
42. Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020)
43. Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018)
44. Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023)
45. Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016)
46. Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic Emergency Medicine 5(7), 739–744 (1998)
47. Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR
48. Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016)
49. Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature Computational Science 2(11), 745–757 (2022)
50. Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023)
51. Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE Transactions on Neural Networks and Learning Systems 32(1), 4–24 (2020)
52. Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons (2008)
53. Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021)
54. Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in Neural Information Processing Systems 28 (2015)
55. Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021)
56. Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021)
57. Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of Cheminformatics 10(1), 1–9 (2018)
58. Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020)
59. Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014)
60. Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE Transactions on Image Processing 13(4), 600–612 (2004)
61. Van der Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of Machine Learning Research 9(11) (2008)
62. McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018)
63. Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022)
64. Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023)
65. Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in Neural Information Processing Systems 30 (2017)
66. Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in Neural Information Processing Systems 31 (2018)
67. Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023)
68. Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE
69. He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016)
70. Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018)
[2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Hartmann, G., Goetzke, G., Düsterer, S., Feuer-Forson, P., Lever, F., Meier, D., Möller, F., Vera Ramirez, L., Guehr, M., Tiedtke, K., et al.: Unsupervised real-world knowledge extraction via disentangled variational autoencoders for photon diagnostics. Scientific Reports 12(1), 20783 (2022) Scheinker et al. [2023] Scheinker, A., Filippetto, D., Cropp, F.: 6d phase space diagnostics based on adaptively tuned physics-informed generative convolutional neural networks. In: Journal of Physics: Conference Series, vol. 2420, p. 012068 (2023). IOP Publishing Cathey et al. [2018] Cathey, B., Cousineau, S., Aleksandrov, A., Zhukov, A.: First six dimensional phase space measurement of an accelerator beam. 
Physical Review Letters 121(6), 064804 (2018) Tenenbaum [2005] Tenenbaum, P.: Lucretia: A matlab-based toolbox for the modelling and simulation of single-pass electron beam transport systems. In: Proceedings of the 2005 Particle Accelerator Conference, pp. 4197–4199 (2005). IEEE Young and Billen [2003] Young, L., Billen, J.: The particle tracking code parmela. In: Proceedings of the Particle Accelerator Conference, vol. 5, pp. 3521–3523 (2003) Pang and Rybarcyk [2014] Pang, X., Rybarcyk, L.: Gpu accelerated online multi-particle beam dynamics simulator for ion linear particle accelerators. Computer Physics Communications 185(3), 744–753 (2014) Adelmann [2019] Adelmann, A.: On nonintrusive uncertainty quantification and surrogate model construction in particle accelerator modeling. SIAM/ASA Journal on Uncertainty Quantification 7(2), 383–416 (2019) Newton [1970] Newton, R.: Inverse problems in physics. SIAM Review 12(3), 346–356 (1970) Wolski et al. [2022] Wolski, A., Johnson, M.A., King, M., Militsyn, B.L., Williams, P.H.: Transverse phase space tomography in an accelerator test facility using image compression and machine learning. Physical Review Accelerators and Beams 25(12), 122803 (2022) Mayet et al. [2022] Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022) Zhu et al. [2] Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. [2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. 
Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. 
IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Scheinker, A., Filippetto, D., Cropp, F.: 6d phase space diagnostics based on adaptively tuned physics-informed generative convolutional neural networks. In: Journal of Physics: Conference Series, vol. 2420, p. 012068 (2023). IOP Publishing Cathey et al. [2018] Cathey, B., Cousineau, S., Aleksandrov, A., Zhukov, A.: First six dimensional phase space measurement of an accelerator beam. Physical Review Letters 121(6), 064804 (2018) Tenenbaum [2005] Tenenbaum, P.: Lucretia: A matlab-based toolbox for the modelling and simulation of single-pass electron beam transport systems. In: Proceedings of the 2005 Particle Accelerator Conference, pp. 4197–4199 (2005). IEEE Young and Billen [2003] Young, L., Billen, J.: The particle tracking code parmela. In: Proceedings of the Particle Accelerator Conference, vol. 5, pp. 3521–3523 (2003) Pang and Rybarcyk [2014] Pang, X., Rybarcyk, L.: Gpu accelerated online multi-particle beam dynamics simulator for ion linear particle accelerators. Computer Physics Communications 185(3), 744–753 (2014) Adelmann [2019] Adelmann, A.: On nonintrusive uncertainty quantification and surrogate model construction in particle accelerator modeling. SIAM/ASA Journal on Uncertainty Quantification 7(2), 383–416 (2019) Newton [1970] Newton, R.: Inverse problems in physics. SIAM Review 12(3), 346–356 (1970) Wolski et al. 
[2022] Wolski, A., Johnson, M.A., King, M., Militsyn, B.L., Williams, P.H.: Transverse phase space tomography in an accelerator test facility using image compression and machine learning. Physical Review Accelerators and Beams 25(12), 122803 (2022) Mayet et al. [2022] Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022) Zhu et al. [2] Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. [2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. 
In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. 
[2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Cathey, B., Cousineau, S., Aleksandrov, A., Zhukov, A.: First six dimensional phase space measurement of an accelerator beam. Physical Review Letters 121(6), 064804 (2018) Tenenbaum [2005] Tenenbaum, P.: Lucretia: A matlab-based toolbox for the modelling and simulation of single-pass electron beam transport systems. In: Proceedings of the 2005 Particle Accelerator Conference, pp. 4197–4199 (2005). IEEE Young and Billen [2003] Young, L., Billen, J.: The particle tracking code parmela. In: Proceedings of the Particle Accelerator Conference, vol. 5, pp. 3521–3523 (2003) Pang and Rybarcyk [2014] Pang, X., Rybarcyk, L.: Gpu accelerated online multi-particle beam dynamics simulator for ion linear particle accelerators. Computer Physics Communications 185(3), 744–753 (2014) Adelmann [2019] Adelmann, A.: On nonintrusive uncertainty quantification and surrogate model construction in particle accelerator modeling. SIAM/ASA Journal on Uncertainty Quantification 7(2), 383–416 (2019) Newton [1970] Newton, R.: Inverse problems in physics. SIAM Review 12(3), 346–356 (1970) Wolski et al. [2022] Wolski, A., Johnson, M.A., King, M., Militsyn, B.L., Williams, P.H.: Transverse phase space tomography in an accelerator test facility using image compression and machine learning. Physical Review Accelerators and Beams 25(12), 122803 (2022) Mayet et al. [2022] Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022) Zhu et al. [2] Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. [2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. 
[2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. 
Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Tenenbaum, P.: Lucretia: A matlab-based toolbox for the modelling and simulation of single-pass electron beam transport systems. In: Proceedings of the 2005 Particle Accelerator Conference, pp. 4197–4199 (2005). 
IEEE Young and Billen [2003] Young, L., Billen, J.: The particle tracking code parmela. In: Proceedings of the Particle Accelerator Conference, vol. 5, pp. 3521–3523 (2003) Pang and Rybarcyk [2014] Pang, X., Rybarcyk, L.: Gpu accelerated online multi-particle beam dynamics simulator for ion linear particle accelerators. Computer Physics Communications 185(3), 744–753 (2014) Adelmann [2019] Adelmann, A.: On nonintrusive uncertainty quantification and surrogate model construction in particle accelerator modeling. SIAM/ASA Journal on Uncertainty Quantification 7(2), 383–416 (2019) Newton [1970] Newton, R.: Inverse problems in physics. SIAM Review 12(3), 346–356 (1970) Wolski et al. [2022] Wolski, A., Johnson, M.A., King, M., Militsyn, B.L., Williams, P.H.: Transverse phase space tomography in an accelerator test facility using image compression and machine learning. Physical Review Accelerators and Beams 25(12), 122803 (2022) Mayet et al. [2022] Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022) Zhu et al. [2] Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. [2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. 
[2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. 
arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Young, L., Billen, J.: The particle tracking code parmela. In: Proceedings of the Particle Accelerator Conference, vol. 5, pp. 3521–3523 (2003) Pang and Rybarcyk [2014] Pang, X., Rybarcyk, L.: Gpu accelerated online multi-particle beam dynamics simulator for ion linear particle accelerators. Computer Physics Communications 185(3), 744–753 (2014) Adelmann [2019] Adelmann, A.: On nonintrusive uncertainty quantification and surrogate model construction in particle accelerator modeling. SIAM/ASA Journal on Uncertainty Quantification 7(2), 383–416 (2019) Newton [1970] Newton, R.: Inverse problems in physics. SIAM Review 12(3), 346–356 (1970) Wolski et al. [2022] Wolski, A., Johnson, M.A., King, M., Militsyn, B.L., Williams, P.H.: Transverse phase space tomography in an accelerator test facility using image compression and machine learning. Physical Review Accelerators and Beams 25(12), 122803 (2022) Mayet et al. [2022] Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022) Zhu et al. [2] Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. 
[2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. 
[2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. 
[2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Pang, X., Rybarcyk, L.: Gpu accelerated online multi-particle beam dynamics simulator for ion linear particle accelerators. Computer Physics Communications 185(3), 744–753 (2014) Adelmann [2019] Adelmann, A.: On nonintrusive uncertainty quantification and surrogate model construction in particle accelerator modeling. SIAM/ASA Journal on Uncertainty Quantification 7(2), 383–416 (2019) Newton [1970] Newton, R.: Inverse problems in physics. SIAM Review 12(3), 346–356 (1970) Wolski et al. [2022] Wolski, A., Johnson, M.A., King, M., Militsyn, B.L., Williams, P.H.: Transverse phase space tomography in an accelerator test facility using image compression and machine learning. Physical Review Accelerators and Beams 25(12), 122803 (2022) Mayet et al. [2022] Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022) Zhu et al. [2] Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. [2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. 
[2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. 
[2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. 
[2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. 
[2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. 
Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. 
[2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. 
arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. 
[2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. 
[2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. 
Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. 
[2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. 
[2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. 
Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. 
[2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. 
[2022] Hartmann, G., Goetzke, G., Düsterer, S., Feuer-Forson, P., Lever, F., Meier, D., Möller, F., Vera Ramirez, L., Guehr, M., Tiedtke, K., et al.: Unsupervised real-world knowledge extraction via disentangled variational autoencoders for photon diagnostics. Scientific Reports 12(1), 20783 (2022) Scheinker et al. [2023] Scheinker, A., Filippetto, D., Cropp, F.: 6d phase space diagnostics based on adaptively tuned physics-informed generative convolutional neural networks. In: Journal of Physics: Conference Series, vol. 2420, p. 012068 (2023). IOP Publishing Cathey et al. [2018] Cathey, B., Cousineau, S., Aleksandrov, A., Zhukov, A.: First six dimensional phase space measurement of an accelerator beam. Physical Review Letters 121(6), 064804 (2018) Tenenbaum [2005] Tenenbaum, P.: Lucretia: A matlab-based toolbox for the modelling and simulation of single-pass electron beam transport systems. In: Proceedings of the 2005 Particle Accelerator Conference, pp. 4197–4199 (2005). IEEE Young and Billen [2003] Young, L., Billen, J.: The particle tracking code parmela. In: Proceedings of the Particle Accelerator Conference, vol. 5, pp. 3521–3523 (2003) Pang and Rybarcyk [2014] Pang, X., Rybarcyk, L.: Gpu accelerated online multi-particle beam dynamics simulator for ion linear particle accelerators. Computer Physics Communications 185(3), 744–753 (2014) Adelmann [2019] Adelmann, A.: On nonintrusive uncertainty quantification and surrogate model construction in particle accelerator modeling. SIAM/ASA Journal on Uncertainty Quantification 7(2), 383–416 (2019) Newton [1970] Newton, R.: Inverse problems in physics. SIAM Review 12(3), 346–356 (1970) Wolski et al. [2022] Wolski, A., Johnson, M.A., King, M., Militsyn, B.L., Williams, P.H.: Transverse phase space tomography in an accelerator test facility using image compression and machine learning. Physical Review Accelerators and Beams 25(12), 122803 (2022) Mayet et al. [2022] Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022) Zhu et al. [2] Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. [2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. 
[2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. 
[2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Hartmann, G., Goetzke, G., Düsterer, S., Feuer-Forson, P., Lever, F., Meier, D., Möller, F., Vera Ramirez, L., Guehr, M., Tiedtke, K., et al.: Unsupervised real-world knowledge extraction via disentangled variational autoencoders for photon diagnostics. Scientific Reports 12(1), 20783 (2022) Scheinker et al. [2023] Scheinker, A., Filippetto, D., Cropp, F.: 6d phase space diagnostics based on adaptively tuned physics-informed generative convolutional neural networks. In: Journal of Physics: Conference Series, vol. 2420, p. 012068 (2023). IOP Publishing Cathey et al. [2018] Cathey, B., Cousineau, S., Aleksandrov, A., Zhukov, A.: First six dimensional phase space measurement of an accelerator beam. 
Physical Review Letters 121(6), 064804 (2018) Tenenbaum [2005] Tenenbaum, P.: Lucretia: A matlab-based toolbox for the modelling and simulation of single-pass electron beam transport systems. In: Proceedings of the 2005 Particle Accelerator Conference, pp. 4197–4199 (2005). IEEE Young and Billen [2003] Young, L., Billen, J.: The particle tracking code parmela. In: Proceedings of the Particle Accelerator Conference, vol. 5, pp. 3521–3523 (2003) Pang and Rybarcyk [2014] Pang, X., Rybarcyk, L.: Gpu accelerated online multi-particle beam dynamics simulator for ion linear particle accelerators. Computer Physics Communications 185(3), 744–753 (2014) Adelmann [2019] Adelmann, A.: On nonintrusive uncertainty quantification and surrogate model construction in particle accelerator modeling. SIAM/ASA Journal on Uncertainty Quantification 7(2), 383–416 (2019) Newton [1970] Newton, R.: Inverse problems in physics. SIAM Review 12(3), 346–356 (1970) Wolski et al. [2022] Wolski, A., Johnson, M.A., King, M., Militsyn, B.L., Williams, P.H.: Transverse phase space tomography in an accelerator test facility using image compression and machine learning. Physical Review Accelerators and Beams 25(12), 122803 (2022) Mayet et al. [2022] Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022) Zhu et al. [2] Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. [2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. 
Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. 
IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Scheinker, A., Filippetto, D., Cropp, F.: 6d phase space diagnostics based on adaptively tuned physics-informed generative convolutional neural networks. In: Journal of Physics: Conference Series, vol. 2420, p. 012068 (2023). IOP Publishing Cathey et al. [2018] Cathey, B., Cousineau, S., Aleksandrov, A., Zhukov, A.: First six dimensional phase space measurement of an accelerator beam. Physical Review Letters 121(6), 064804 (2018) Tenenbaum [2005] Tenenbaum, P.: Lucretia: A matlab-based toolbox for the modelling and simulation of single-pass electron beam transport systems. In: Proceedings of the 2005 Particle Accelerator Conference, pp. 4197–4199 (2005). IEEE Young and Billen [2003] Young, L., Billen, J.: The particle tracking code parmela. In: Proceedings of the Particle Accelerator Conference, vol. 5, pp. 3521–3523 (2003) Pang and Rybarcyk [2014] Pang, X., Rybarcyk, L.: Gpu accelerated online multi-particle beam dynamics simulator for ion linear particle accelerators. Computer Physics Communications 185(3), 744–753 (2014) Adelmann [2019] Adelmann, A.: On nonintrusive uncertainty quantification and surrogate model construction in particle accelerator modeling. SIAM/ASA Journal on Uncertainty Quantification 7(2), 383–416 (2019) Newton [1970] Newton, R.: Inverse problems in physics. SIAM Review 12(3), 346–356 (1970) Wolski et al. 
[2022] Wolski, A., Johnson, M.A., King, M., Militsyn, B.L., Williams, P.H.: Transverse phase space tomography in an accelerator test facility using image compression and machine learning. Physical Review Accelerators and Beams 25(12), 122803 (2022) Mayet et al. [2022] Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022) Zhu et al. [2] Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. [2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. 
In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. 
[2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Cathey, B., Cousineau, S., Aleksandrov, A., Zhukov, A.: First six dimensional phase space measurement of an accelerator beam. Physical Review Letters 121(6), 064804 (2018) Tenenbaum [2005] Tenenbaum, P.: Lucretia: A matlab-based toolbox for the modelling and simulation of single-pass electron beam transport systems. In: Proceedings of the 2005 Particle Accelerator Conference, pp. 4197–4199 (2005). IEEE Young and Billen [2003] Young, L., Billen, J.: The particle tracking code parmela. In: Proceedings of the Particle Accelerator Conference, vol. 5, pp. 3521–3523 (2003) Pang and Rybarcyk [2014] Pang, X., Rybarcyk, L.: Gpu accelerated online multi-particle beam dynamics simulator for ion linear particle accelerators. Computer Physics Communications 185(3), 744–753 (2014) Adelmann [2019] Adelmann, A.: On nonintrusive uncertainty quantification and surrogate model construction in particle accelerator modeling. SIAM/ASA Journal on Uncertainty Quantification 7(2), 383–416 (2019) Newton [1970] Newton, R.: Inverse problems in physics. SIAM Review 12(3), 346–356 (1970) Wolski et al. [2022] Wolski, A., Johnson, M.A., King, M., Militsyn, B.L., Williams, P.H.: Transverse phase space tomography in an accelerator test facility using image compression and machine learning. Physical Review Accelerators and Beams 25(12), 122803 (2022) Mayet et al. [2022] Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022) Zhu et al. [2] Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. [2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. 
[2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. 
Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Tenenbaum, P.: Lucretia: A matlab-based toolbox for the modelling and simulation of single-pass electron beam transport systems. In: Proceedings of the 2005 Particle Accelerator Conference, pp. 4197–4199 (2005). 
IEEE Young and Billen [2003] Young, L., Billen, J.: The particle tracking code parmela. In: Proceedings of the Particle Accelerator Conference, vol. 5, pp. 3521–3523 (2003) Pang and Rybarcyk [2014] Pang, X., Rybarcyk, L.: Gpu accelerated online multi-particle beam dynamics simulator for ion linear particle accelerators. Computer Physics Communications 185(3), 744–753 (2014) Adelmann [2019] Adelmann, A.: On nonintrusive uncertainty quantification and surrogate model construction in particle accelerator modeling. SIAM/ASA Journal on Uncertainty Quantification 7(2), 383–416 (2019) Newton [1970] Newton, R.: Inverse problems in physics. SIAM Review 12(3), 346–356 (1970) Wolski et al. [2022] Wolski, A., Johnson, M.A., King, M., Militsyn, B.L., Williams, P.H.: Transverse phase space tomography in an accelerator test facility using image compression and machine learning. Physical Review Accelerators and Beams 25(12), 122803 (2022) Mayet et al. [2022] Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022) Zhu et al. [2] Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. [2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. 
[2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. 
[2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. 
John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. 
[2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. 
[2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. 
[2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. 
Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. 
[2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. 
arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. 
[2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. 
[2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. 
Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. 
[2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. 
[2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. 
Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. 
[2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. 
  24. Kingma, D.P., Welling, M.: Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114 (2013) Hartmann et al. [2022] Hartmann, G., Goetzke, G., Düsterer, S., Feuer-Forson, P., Lever, F., Meier, D., Möller, F., Vera Ramirez, L., Guehr, M., Tiedtke, K., et al.: Unsupervised real-world knowledge extraction via disentangled variational autoencoders for photon diagnostics. Scientific Reports 12(1), 20783 (2022) Scheinker et al. [2023] Scheinker, A., Filippetto, D., Cropp, F.: 6d phase space diagnostics based on adaptively tuned physics-informed generative convolutional neural networks. In: Journal of Physics: Conference Series, vol. 2420, p. 012068 (2023). IOP Publishing Cathey et al. [2018] Cathey, B., Cousineau, S., Aleksandrov, A., Zhukov, A.: First six dimensional phase space measurement of an accelerator beam. Physical Review Letters 121(6), 064804 (2018) Tenenbaum [2005] Tenenbaum, P.: Lucretia: A matlab-based toolbox for the modelling and simulation of single-pass electron beam transport systems. In: Proceedings of the 2005 Particle Accelerator Conference, pp. 4197–4199 (2005). IEEE Young and Billen [2003] Young, L., Billen, J.: The particle tracking code parmela. In: Proceedings of the Particle Accelerator Conference, vol. 5, pp. 3521–3523 (2003) Pang and Rybarcyk [2014] Pang, X., Rybarcyk, L.: Gpu accelerated online multi-particle beam dynamics simulator for ion linear particle accelerators. Computer Physics Communications 185(3), 744–753 (2014) Adelmann [2019] Adelmann, A.: On nonintrusive uncertainty quantification and surrogate model construction in particle accelerator modeling. SIAM/ASA Journal on Uncertainty Quantification 7(2), 383–416 (2019) Newton [1970] Newton, R.: Inverse problems in physics. SIAM Review 12(3), 346–356 (1970) Wolski et al. [2022] Wolski, A., Johnson, M.A., King, M., Militsyn, B.L., Williams, P.H.: Transverse phase space tomography in an accelerator test facility using image compression and machine learning. Physical Review Accelerators and Beams 25(12), 122803 (2022) Mayet et al. [2022] Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022) Zhu et al. [2] Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. [2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. 
Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. 
Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Hartmann, G., Goetzke, G., Düsterer, S., Feuer-Forson, P., Lever, F., Meier, D., Möller, F., Vera Ramirez, L., Guehr, M., Tiedtke, K., et al.: Unsupervised real-world knowledge extraction via disentangled variational autoencoders for photon diagnostics. Scientific Reports 12(1), 20783 (2022) Scheinker et al. [2023] Scheinker, A., Filippetto, D., Cropp, F.: 6d phase space diagnostics based on adaptively tuned physics-informed generative convolutional neural networks. In: Journal of Physics: Conference Series, vol. 2420, p. 012068 (2023). IOP Publishing Cathey et al. 
[2018] Cathey, B., Cousineau, S., Aleksandrov, A., Zhukov, A.: First six dimensional phase space measurement of an accelerator beam. Physical Review Letters 121(6), 064804 (2018) Tenenbaum [2005] Tenenbaum, P.: Lucretia: A matlab-based toolbox for the modelling and simulation of single-pass electron beam transport systems. In: Proceedings of the 2005 Particle Accelerator Conference, pp. 4197–4199 (2005). IEEE Young and Billen [2003] Young, L., Billen, J.: The particle tracking code parmela. In: Proceedings of the Particle Accelerator Conference, vol. 5, pp. 3521–3523 (2003) Pang and Rybarcyk [2014] Pang, X., Rybarcyk, L.: Gpu accelerated online multi-particle beam dynamics simulator for ion linear particle accelerators. Computer Physics Communications 185(3), 744–753 (2014) Adelmann [2019] Adelmann, A.: On nonintrusive uncertainty quantification and surrogate model construction in particle accelerator modeling. SIAM/ASA Journal on Uncertainty Quantification 7(2), 383–416 (2019) Newton [1970] Newton, R.: Inverse problems in physics. SIAM Review 12(3), 346–356 (1970) Wolski et al. [2022] Wolski, A., Johnson, M.A., King, M., Militsyn, B.L., Williams, P.H.: Transverse phase space tomography in an accelerator test facility using image compression and machine learning. Physical Review Accelerators and Beams 25(12), 122803 (2022) Mayet et al. [2022] Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022) Zhu et al. [2] Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. [2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. 
[2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. 
[2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Scheinker, A., Filippetto, D., Cropp, F.: 6d phase space diagnostics based on adaptively tuned physics-informed generative convolutional neural networks. In: Journal of Physics: Conference Series, vol. 2420, p. 012068 (2023). IOP Publishing Cathey et al. [2018] Cathey, B., Cousineau, S., Aleksandrov, A., Zhukov, A.: First six dimensional phase space measurement of an accelerator beam. Physical Review Letters 121(6), 064804 (2018) Tenenbaum [2005] Tenenbaum, P.: Lucretia: A matlab-based toolbox for the modelling and simulation of single-pass electron beam transport systems. In: Proceedings of the 2005 Particle Accelerator Conference, pp. 4197–4199 (2005). IEEE Young and Billen [2003] Young, L., Billen, J.: The particle tracking code parmela. In: Proceedings of the Particle Accelerator Conference, vol. 5, pp. 3521–3523 (2003) Pang and Rybarcyk [2014] Pang, X., Rybarcyk, L.: Gpu accelerated online multi-particle beam dynamics simulator for ion linear particle accelerators. Computer Physics Communications 185(3), 744–753 (2014) Adelmann [2019] Adelmann, A.: On nonintrusive uncertainty quantification and surrogate model construction in particle accelerator modeling. SIAM/ASA Journal on Uncertainty Quantification 7(2), 383–416 (2019) Newton [1970] Newton, R.: Inverse problems in physics. 
SIAM Review 12(3), 346–356 (1970) Wolski et al. [2022] Wolski, A., Johnson, M.A., King, M., Militsyn, B.L., Williams, P.H.: Transverse phase space tomography in an accelerator test facility using image compression and machine learning. Physical Review Accelerators and Beams 25(12), 122803 (2022) Mayet et al. [2022] Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022) Zhu et al. [2] Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. [2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. 
[2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. 
[2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Cathey, B., Cousineau, S., Aleksandrov, A., Zhukov, A.: First six dimensional phase space measurement of an accelerator beam. Physical Review Letters 121(6), 064804 (2018) Tenenbaum [2005] Tenenbaum, P.: Lucretia: A matlab-based toolbox for the modelling and simulation of single-pass electron beam transport systems. In: Proceedings of the 2005 Particle Accelerator Conference, pp. 4197–4199 (2005). IEEE Young and Billen [2003] Young, L., Billen, J.: The particle tracking code parmela. In: Proceedings of the Particle Accelerator Conference, vol. 5, pp. 3521–3523 (2003) Pang and Rybarcyk [2014] Pang, X., Rybarcyk, L.: Gpu accelerated online multi-particle beam dynamics simulator for ion linear particle accelerators. Computer Physics Communications 185(3), 744–753 (2014) Adelmann [2019] Adelmann, A.: On nonintrusive uncertainty quantification and surrogate model construction in particle accelerator modeling. SIAM/ASA Journal on Uncertainty Quantification 7(2), 383–416 (2019) Newton [1970] Newton, R.: Inverse problems in physics. SIAM Review 12(3), 346–356 (1970) Wolski et al. [2022] Wolski, A., Johnson, M.A., King, M., Militsyn, B.L., Williams, P.H.: Transverse phase space tomography in an accelerator test facility using image compression and machine learning. Physical Review Accelerators and Beams 25(12), 122803 (2022) Mayet et al. [2022] Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022) Zhu et al. [2] Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. 
[2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. 
[2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. 
[2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Tenenbaum, P.: Lucretia: A matlab-based toolbox for the modelling and simulation of single-pass electron beam transport systems. In: Proceedings of the 2005 Particle Accelerator Conference, pp. 4197–4199 (2005). IEEE Young and Billen [2003] Young, L., Billen, J.: The particle tracking code parmela. In: Proceedings of the Particle Accelerator Conference, vol. 5, pp. 3521–3523 (2003) Pang and Rybarcyk [2014] Pang, X., Rybarcyk, L.: Gpu accelerated online multi-particle beam dynamics simulator for ion linear particle accelerators. Computer Physics Communications 185(3), 744–753 (2014) Adelmann [2019] Adelmann, A.: On nonintrusive uncertainty quantification and surrogate model construction in particle accelerator modeling. SIAM/ASA Journal on Uncertainty Quantification 7(2), 383–416 (2019) Newton [1970] Newton, R.: Inverse problems in physics. SIAM Review 12(3), 346–356 (1970) Wolski et al. [2022] Wolski, A., Johnson, M.A., King, M., Militsyn, B.L., Williams, P.H.: Transverse phase space tomography in an accelerator test facility using image compression and machine learning. Physical Review Accelerators and Beams 25(12), 122803 (2022) Mayet et al. [2022] Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022) Zhu et al. [2] Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. [2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. 
[2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. 
[2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Young, L., Billen, J.: The particle tracking code parmela. In: Proceedings of the Particle Accelerator Conference, vol. 5, pp. 3521–3523 (2003) Pang and Rybarcyk [2014] Pang, X., Rybarcyk, L.: Gpu accelerated online multi-particle beam dynamics simulator for ion linear particle accelerators. Computer Physics Communications 185(3), 744–753 (2014) Adelmann [2019] Adelmann, A.: On nonintrusive uncertainty quantification and surrogate model construction in particle accelerator modeling. SIAM/ASA Journal on Uncertainty Quantification 7(2), 383–416 (2019) Newton [1970] Newton, R.: Inverse problems in physics. SIAM Review 12(3), 346–356 (1970) Wolski et al. [2022] Wolski, A., Johnson, M.A., King, M., Militsyn, B.L., Williams, P.H.: Transverse phase space tomography in an accelerator test facility using image compression and machine learning. Physical Review Accelerators and Beams 25(12), 122803 (2022) Mayet et al. [2022] Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022) Zhu et al. 
Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. 
[2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. 
[2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. 
[2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. 
[2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. 
[2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. 
[2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. 
[2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. 
arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. 
IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. 
In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. 
IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. 
[2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. 
[2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. 
IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. 
[2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. 
IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. 
In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. 
[2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. 
[2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. 
IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. 
arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. 
[2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. 
[2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. 
[2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. 
arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. 
[2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. 
[2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. 
[2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. 
[2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. 
[2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. 
Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Wolski, A., Johnson, M.A., King, M., Militsyn, B.L., Williams, P.H.: Transverse phase space tomography in an accelerator test facility using image compression and machine learning. Physical Review Accelerators and Beams 25(12), 122803 (2022) Mayet et al. [2022] Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022) Zhu et al. [2] Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. [2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. 
Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. 
Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022) Zhu et al. 
[2] Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. [2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. 
[2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. 
[2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. [2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. 
[2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. 
[2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. [2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. 
Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. 
IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. 
Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. 
IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. 
IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. 
arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. 
[2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. 
[2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. 
[2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. 
arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. 
[2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. 
[2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. 
[2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. 
[2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. 
arXiv preprint arXiv:1807.07543 (2018) Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). 
IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018)
Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. [2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. 
[2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. 
[2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Young, L., Billen, J.: The particle tracking code parmela. In: Proceedings of the Particle Accelerator Conference, vol. 5, pp. 3521–3523 (2003) Pang and Rybarcyk [2014] Pang, X., Rybarcyk, L.: Gpu accelerated online multi-particle beam dynamics simulator for ion linear particle accelerators. Computer Physics Communications 185(3), 744–753 (2014) Adelmann [2019] Adelmann, A.: On nonintrusive uncertainty quantification and surrogate model construction in particle accelerator modeling. SIAM/ASA Journal on Uncertainty Quantification 7(2), 383–416 (2019) Newton [1970] Newton, R.: Inverse problems in physics. SIAM Review 12(3), 346–356 (1970) Wolski et al. [2022] Wolski, A., Johnson, M.A., King, M., Militsyn, B.L., Williams, P.H.: Transverse phase space tomography in an accelerator test facility using image compression and machine learning. Physical Review Accelerators and Beams 25(12), 122803 (2022) Mayet et al. [2022] Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022) Zhu et al. [2] Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. [2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. 
[2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. 
Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Pang, X., Rybarcyk, L.: Gpu accelerated online multi-particle beam dynamics simulator for ion linear particle accelerators. Computer Physics Communications 185(3), 744–753 (2014) Adelmann [2019] Adelmann, A.: On nonintrusive uncertainty quantification and surrogate model construction in particle accelerator modeling. SIAM/ASA Journal on Uncertainty Quantification 7(2), 383–416 (2019) Newton [1970] Newton, R.: Inverse problems in physics. SIAM Review 12(3), 346–356 (1970) Wolski et al. [2022] Wolski, A., Johnson, M.A., King, M., Militsyn, B.L., Williams, P.H.: Transverse phase space tomography in an accelerator test facility using image compression and machine learning. Physical Review Accelerators and Beams 25(12), 122803 (2022) Mayet et al. [2022] Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022) Zhu et al. [2] Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. 
[2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. 
[2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. 
[2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Adelmann, A.: On nonintrusive uncertainty quantification and surrogate model construction in particle accelerator modeling. SIAM/ASA Journal on Uncertainty Quantification 7(2), 383–416 (2019) Newton [1970] Newton, R.: Inverse problems in physics. SIAM Review 12(3), 346–356 (1970) Wolski et al. [2022] Wolski, A., Johnson, M.A., King, M., Militsyn, B.L., Williams, P.H.: Transverse phase space tomography in an accelerator test facility using image compression and machine learning. Physical Review Accelerators and Beams 25(12), 122803 (2022) Mayet et al. [2022] Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022) Zhu et al. [2] Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. [2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. 
Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. 
IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Newton, R.: Inverse problems in physics. SIAM Review 12(3), 346–356 (1970) Wolski et al. [2022] Wolski, A., Johnson, M.A., King, M., Militsyn, B.L., Williams, P.H.: Transverse phase space tomography in an accelerator test facility using image compression and machine learning. Physical Review Accelerators and Beams 25(12), 122803 (2022) Mayet et al. [2022] Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022) Zhu et al. [2] Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. [2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. 
[2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. 
[2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Wolski, A., Johnson, M.A., King, M., Militsyn, B.L., Williams, P.H.: Transverse phase space tomography in an accelerator test facility using image compression and machine learning. Physical Review Accelerators and Beams 25(12), 122803 (2022) Mayet et al. [2022] Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022) Zhu et al. 
[2] Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. [2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. 
[2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. 
Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. 
[2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. 
Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. 
[2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. 
[2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. 
[2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. 
Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. 
Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. 
IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. 
In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. 
In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. 
[2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. 
27. Cathey, B., Cousineau, S., Aleksandrov, A., Zhukov, A.: First six dimensional phase space measurement of an accelerator beam. Physical Review Letters 121(6), 064804 (2018)
28. Tenenbaum, P.: Lucretia: A matlab-based toolbox for the modelling and simulation of single-pass electron beam transport systems. In: Proceedings of the 2005 Particle Accelerator Conference, pp. 4197–4199 (2005). IEEE
29. Young, L., Billen, J.: The particle tracking code parmela. In: Proceedings of the Particle Accelerator Conference, vol. 5, pp. 3521–3523 (2003)
30. Pang, X., Rybarcyk, L.: Gpu accelerated online multi-particle beam dynamics simulator for ion linear particle accelerators. Computer Physics Communications 185(3), 744–753 (2014)
31. Adelmann, A.: On nonintrusive uncertainty quantification and surrogate model construction in particle accelerator modeling. SIAM/ASA Journal on Uncertainty Quantification 7(2), 383–416 (2019)
32. Newton, R.: Inverse problems in physics. SIAM Review 12(3), 346–356 (1970)
33. Wolski, A., Johnson, M.A., King, M., Militsyn, B.L., Williams, P.H.: Transverse phase space tomography in an accelerator test facility using image compression and machine learning. Physical Review Accelerators and Beams 25(12), 122803 (2022)
34. Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022)
35. Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2021)
36. Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018)
37. Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023)
38. Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023)
39. Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical Review Accelerators and Beams 23(7), 074601 (2020)
40. Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022)
41. Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022)
42. Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020)
43. Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018)
44. Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023)
45. Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016)
46. Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic Emergency Medicine 5(7), 739–744 (1998)
47. Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR
48. Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016)
49. Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature Computational Science 2(11), 745–757 (2022)
50. Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023)
51. Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE Transactions on Neural Networks and Learning Systems 32(1), 4–24 (2020)
52. Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons (2008)
53. Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021)
54. Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in Neural Information Processing Systems 28 (2015)
55. Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021)
56. Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021)
57. Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of Cheminformatics 10(1), 1–9 (2018)
58. Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020)
59. Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014)
60. Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE Transactions on Image Processing 13(4), 600–612 (2004)
61. Van der Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of Machine Learning Research 9(11) (2008)
62. McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018)
63. Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022)
64. Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023)
65. Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in Neural Information Processing Systems 30 (2017)
66. Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in Neural Information Processing Systems 31 (2018)
67. Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023)
68. Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE
69. He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016)
70. Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018)
Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. [2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. 
[2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. 
[2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Wolski, A., Johnson, M.A., King, M., Militsyn, B.L., Williams, P.H.: Transverse phase space tomography in an accelerator test facility using image compression and machine learning. Physical Review Accelerators and Beams 25(12), 122803 (2022) Mayet et al. [2022] Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022) Zhu et al. [2] Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. [2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. 
Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. 
[2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022) Zhu et al. [2] Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. [2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. 
[2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. 
[2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. [2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. 
[2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. 
[2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. [2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. 
Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. 
Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. 
[2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. 
Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. 
Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. 
Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. 
1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. 
[2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. 
[2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. 
[2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. 
[2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. 
[2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. 
IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. 
[2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. 
[2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. 
In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. 
arXiv preprint arXiv:1807.07543 (2018) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. 
In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018)
28. Tenenbaum, P.: Lucretia: A MATLAB-based toolbox for the modelling and simulation of single-pass electron beam transport systems. In: Proceedings of the 2005 Particle Accelerator Conference, pp. 4197–4199 (2005). IEEE
29. Young, L., Billen, J.: The particle tracking code PARMELA. In: Proceedings of the Particle Accelerator Conference, vol. 5, pp. 3521–3523 (2003)
30. Pang, X., Rybarcyk, L.: GPU accelerated online multi-particle beam dynamics simulator for ion linear particle accelerators. Computer Physics Communications 185(3), 744–753 (2014)
31. Adelmann, A.: On nonintrusive uncertainty quantification and surrogate model construction in particle accelerator modeling. SIAM/ASA Journal on Uncertainty Quantification 7(2), 383–416 (2019)
32. Newton, R.: Inverse problems in physics. SIAM Review 12(3), 346–356 (1970)
33. Wolski, A., Johnson, M.A., King, M., Militsyn, B.L., Williams, P.H.: Transverse phase space tomography in an accelerator test facility using image compression and machine learning. Physical Review Accelerators and Beams 25(12), 122803 (2022)
34. Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022)
35. Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2021)
36. Emma, C., Edelen, A., Hogan, M., O'Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018)
37. Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep Lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023)
38. Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense X-ray free-electron-laser pulses using Bayesian optimization. Physical Review Research 5(2), 023114 (2023)
39. Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical Review Accelerators and Beams 23(7), 074601 (2020)
40. Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022)
41. Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient RF cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022)
42. Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at Jefferson Laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020)
43. Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018)
44. Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023)
45. Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016)
46. Nelson, B.K.: Time series analysis using autoregressive integrated moving average (ARIMA) models. Academic Emergency Medicine 5(7), 739–744 (1998)
47. Germain, M., Gregor, K., Murray, I., Larochelle, H.: MADE: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR
48. Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016)
49. Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature Computational Science 2(11), 745–757 (2022)
50. Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023)
51. Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE Transactions on Neural Networks and Learning Systems 32(1), 4–24 (2020)
52. Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons (2008)
53. Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021)
54. Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in Neural Information Processing Systems 28 (2015)
55. Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021)
56. Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in VAE latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021)
57. Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of Cheminformatics 10(1), 1–9 (2018)
58. Karevan, Z., Suykens, J.A.: Transductive LSTM for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020)
59. Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014)
60. Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE Transactions on Image Processing 13(4), 600–612 (2004)
61. Van der Maaten, L., Hinton, G.: Visualizing data using t-SNE. Journal of Machine Learning Research 9(11) (2008)
62. McInnes, L., Healy, J., Melville, J.: UMAP: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018)
63. Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022)
64. Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023)
65. Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: GANs trained by a two time-scale update rule converge to a local Nash equilibrium. Advances in Neural Information Processing Systems 30 (2017)
66. Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in Neural Information Processing Systems 31 (2018)
67. Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023)
68. Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of ResNet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE
69. He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016)
70. Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018)
[2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. 
[2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Adelmann, A.: On nonintrusive uncertainty quantification and surrogate model construction in particle accelerator modeling. SIAM/ASA Journal on Uncertainty Quantification 7(2), 383–416 (2019) Newton [1970] Newton, R.: Inverse problems in physics. SIAM Review 12(3), 346–356 (1970) Wolski et al. [2022] Wolski, A., Johnson, M.A., King, M., Militsyn, B.L., Williams, P.H.: Transverse phase space tomography in an accelerator test facility using image compression and machine learning. Physical Review Accelerators and Beams 25(12), 122803 (2022) Mayet et al. [2022] Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022) Zhu et al. [2] Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. 
Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. [2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. 
[2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. 
[2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Newton, R.: Inverse problems in physics. SIAM Review 12(3), 346–356 (1970) Wolski et al. [2022] Wolski, A., Johnson, M.A., King, M., Militsyn, B.L., Williams, P.H.: Transverse phase space tomography in an accelerator test facility using image compression and machine learning. Physical Review Accelerators and Beams 25(12), 122803 (2022) Mayet et al. [2022] Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022) Zhu et al. [2] Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. [2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. 
Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. 
[2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Wolski, A., Johnson, M.A., King, M., Militsyn, B.L., Williams, P.H.: Transverse phase space tomography in an accelerator test facility using image compression and machine learning. Physical Review Accelerators and Beams 25(12), 122803 (2022) Mayet et al. [2022] Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022) Zhu et al. [2] Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. [2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. 
[2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. 
Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022) Zhu et al. [2] Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. 
[2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. 
[2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. 
[2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2) Emma et al. [2018] Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. [2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. 
[2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. 
IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. 
In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. 
[2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. 
[2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. 
IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. 
arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. 
[2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. 
[2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. 
[2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. 
arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. 
[2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. 
[2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. 
[2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. 
Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. 
[2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. 
[2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. 
Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. 
Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. 
[2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. 
[2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. 
Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. 
IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. 
Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. 
[2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. 
[2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. 
Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. 
arXiv preprint arXiv:1807.07543 (2018) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. 
In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018)
[2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. 
[2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. 
IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. 
In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. 
[2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. 
[2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. 
[2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. 
[2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. 
Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. 
[2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. 
arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. 
[2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. 
[2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. 
Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. 
[2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. 
[2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. 
Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. 
[2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. 
[2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. 
[2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. 
Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. 
John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. 
[2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. 
[2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. 
[2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. 
IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. 
[2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. 
[2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. 
[2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. 
[2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. 
[2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. 
Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. 
Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. 
[2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. 
[2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. 
Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. 
IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. 
Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. 
[2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. 
[2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. 
Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. 
IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. 
[2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. 
Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. 
[2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. 
32. Newton, R.: Inverse problems in physics. SIAM Review 12(3), 346–356 (1970)
33. Wolski, A., Johnson, M.A., King, M., Militsyn, B.L., Williams, P.H.: Transverse phase space tomography in an accelerator test facility using image compression and machine learning. Physical Review Accelerators and Beams 25(12), 122803 (2022)
34. Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022)
35. Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2021)
36. Emma, C., Edelen, A., Hogan, M., O'Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018)
37. Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep Lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023)
38. Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense X-ray free-electron-laser pulses using Bayesian optimization. Physical Review Research 5(2), 023114 (2023)
39. Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical Review Accelerators and Beams 23(7), 074601 (2020)
40. Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022)
41. Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient RF cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022)
42. Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at Jefferson Laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020)
43. Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018)
44. Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023)
45. Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016)
46. Nelson, B.K.: Time series analysis using autoregressive integrated moving average (ARIMA) models. Academic Emergency Medicine 5(7), 739–744 (1998)
47. Germain, M., Gregor, K., Murray, I., Larochelle, H.: MADE: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR
48. Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016)
49. Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature Computational Science 2(11), 745–757 (2022)
50. Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023)
51. Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE Transactions on Neural Networks and Learning Systems 32(1), 4–24 (2020)
52. Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons (2008)
53. Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021)
54. Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in Neural Information Processing Systems 28 (2015)
55. Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021)
56. Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in VAE latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021)
57. Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of Cheminformatics 10(1), 1–9 (2018)
58. Karevan, Z., Suykens, J.A.: Transductive LSTM for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020)
59. Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014)
60. Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE Transactions on Image Processing 13(4), 600–612 (2004)
61. Van der Maaten, L., Hinton, G.: Visualizing data using t-SNE. Journal of Machine Learning Research 9(11) (2008)
62. McInnes, L., Healy, J., Melville, J.: UMAP: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018)
63. Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022)
64. Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023)
65. Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: GANs trained by a two time-scale update rule converge to a local Nash equilibrium. Advances in Neural Information Processing Systems 30 (2017)
66. Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in Neural Information Processing Systems 31 (2018)
67. Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023)
68. Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of ResNet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE
69. He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016)
70. Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018)
[2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. 
[2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. 
[2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. 
[2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. 
arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. 
IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. 
In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. 
IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. 
[2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. 
[2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. 
IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. 
[2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. 
IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. 
33. Wolski, A., Johnson, M.A., King, M., Militsyn, B.L., Williams, P.H.: Transverse phase space tomography in an accelerator test facility using image compression and machine learning. Physical Review Accelerators and Beams 25(12), 122803 (2022)
34. Mayet, F., Hachmann, M., Floettmann, K., Burkart, F., Dinter, H., Kuropka, W., Vinatier, T., Assmann, R.: Predicting the transverse emittance of space charge dominated beams using the phase advance scan technique and a fully connected neural network. Physical Review Accelerators and Beams 25(9), 094601 (2022)
35. Zhu, J., Chen, Y., Brinker, F., Decking, W., Tomin, S., Schlarb, H.: High-fidelity prediction of megapixel longitudinal phase-space images of electron beams using encoder-decoder neural networks. Physical Review Applied 16(2), 024005 (2021)
36. Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018)
37. Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep Lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023)
38. Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense X-ray free-electron-laser pulses using Bayesian optimization. Physical Review Research 5(2), 023114 (2023)
39. Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical Review Accelerators and Beams 23(7), 074601 (2020)
40. Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022)
41. Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient RF cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022)
42. Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at Jefferson Laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020)
43. Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018)
44. Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023)
45. Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016)
46. Nelson, B.K.: Time series analysis using autoregressive integrated moving average (ARIMA) models. Academic Emergency Medicine 5(7), 739–744 (1998)
47. Germain, M., Gregor, K., Murray, I., Larochelle, H.: MADE: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR
48. Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016)
49. Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature Computational Science 2(11), 745–757 (2022)
50. Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023)
51. Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE Transactions on Neural Networks and Learning Systems 32(1), 4–24 (2020)
52. Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons (2008)
53. Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021)
54. Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in Neural Information Processing Systems 28 (2015)
55. Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021)
56. Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in VAE latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021)
57. Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of Cheminformatics 10(1), 1–9 (2018)
58. Karevan, Z., Suykens, J.A.: Transductive LSTM for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020)
59. Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014)
60. Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE Transactions on Image Processing 13(4), 600–612 (2004)
61. Van der Maaten, L., Hinton, G.: Visualizing data using t-SNE. Journal of Machine Learning Research 9(11) (2008)
62. McInnes, L., Healy, J., Melville, J.: UMAP: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018)
63. Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022)
64. Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023)
65. Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: GANs trained by a two time-scale update rule converge to a local Nash equilibrium. Advances in Neural Information Processing Systems 30 (2017)
66. Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in Neural Information Processing Systems 31 (2018)
67. Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023)
68. Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of ResNet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE
69. He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016)
70. Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018)
Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Emma, C., Edelen, A., Hogan, M., O’Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018) Caliari et al. [2023] Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. 
Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. 
Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023) Breckwoldt et al. [2023] Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. 
Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. 
Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. 
[2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. 
[2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. 
Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. 
[2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. 
[2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. 
arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. 
The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. 
[2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. 
[2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. 
[2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. 
arXiv preprint arXiv:1807.07543 (2018) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. 
In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018)
[2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. 
[2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. 
[2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. 
[2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. 
arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. 
IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. 
In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. 
IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. 
[2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. 
[2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. 
IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. 
[2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. 
IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. 
In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. 
[2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. 
[2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. 
IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. 
arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. 
[2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. 
[2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. 
[2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. 
arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. 
[2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. 
[2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. 
[2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. 
[2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. 
arXiv preprint arXiv:1807.07543 (2018) Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). 
IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018)
[2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense x-ray free-electron-laser pulses using bayesian optimization. Physical Review Research 5(2), 023114 (2023) Ivanov and Agapov [2020] Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. 
The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. 
[2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical review accelerators and beams 23(7), 074601 (2020) Meier et al. [2022] Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. 
[2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. 
[2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022) Obermair et al. [2022] Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. 
[2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient rf cavities. 
Physical Review Accelerators and Beams 25(10), 104601 (2022) Tennant et al. [2020] Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. 
[2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020) Li et al. [2018] Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. 
[2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. 
[2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018) Cropp et al. [2023] Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. 
Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. 
Physical Review Accelerators and Beams 26(5), 052801 (2023) Scheinker and Scheinker [2016] Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. 
IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016) Nelson [1998] Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. 
Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. 
[2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. 
[2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. 
arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. 
Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. 
[2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. 
[2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. 
[2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. 
[2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. 
Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. 
IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. 
IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. 
[2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. 
arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. 
[2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. 
[2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. 
Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. 
  36. Emma, C., Edelen, A., Hogan, M., O'Shea, B., White, G., Yakimenko, V.: Machine learning-based longitudinal phase space prediction of particle accelerators. Physical Review Accelerators and Beams 21(11), 112802 (2018)
  37. Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep Lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023)
  38. Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense X-ray free-electron-laser pulses using Bayesian optimization. Physical Review Research 5(2), 023114 (2023)
  39. Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical Review Accelerators and Beams 23(7), 074601 (2020)
  40. Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022)
  41. Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient RF cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022)
  42. Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at Jefferson Laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020)
  43. Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018)
  44. Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023)
  45. Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016)
  46. Nelson, B.K.: Time series analysis using autoregressive integrated moving average (ARIMA) models. Academic Emergency Medicine 5(7), 739–744 (1998)
  47. Germain, M., Gregor, K., Murray, I., Larochelle, H.: MADE: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR
  48. Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016)
  49. Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature Computational Science 2(11), 745–757 (2022)
  50. Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023)
  51. Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE Transactions on Neural Networks and Learning Systems 32(1), 4–24 (2020)
  52. Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons (2008)
  53. Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021)
  54. Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in Neural Information Processing Systems 28 (2015)
  55. Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021)
  56. Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in VAE latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021)
  57. Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of Cheminformatics 10(1), 1–9 (2018)
  58. Karevan, Z., Suykens, J.A.: Transductive LSTM for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020)
  59. Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014)
  60. Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE Transactions on Image Processing 13(4), 600–612 (2004)
  61. Van der Maaten, L., Hinton, G.: Visualizing data using t-SNE. Journal of Machine Learning Research 9(11) (2008)
  62. McInnes, L., Healy, J., Melville, J.: UMAP: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018)
  63. Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022)
  64. Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023)
  65. Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: GANs trained by a two time-scale update rule converge to a local Nash equilibrium. Advances in Neural Information Processing Systems 30 (2017)
  66. Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in Neural Information Processing Systems 31 (2018)
  67. Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023)
  68. Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of ResNet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE
  69. He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016)
  70. Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018)
[2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. 
[2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. 
[2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. 
Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. 
John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. 
[2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. 
[2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. 
[2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. 
IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. 
[2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. 
[2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. 
[2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. 
[2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. 
37. Caliari, C., Oeftiger, A., Boine-Frankenheim, O.: Identification of magnetic field errors in synchrotrons based on deep Lie map networks. Physical Review Accelerators and Beams 26(6), 064601 (2023)
38. Breckwoldt, N., Son, S.-K., Mazza, T., Rörig, A., Boll, R., Meyer, M., LaForge, A.C., Mishra, D., Berrah, N., Santra, R., et al.: Machine-learning calibration of intense X-ray free-electron-laser pulses using Bayesian optimization. Physical Review Research 5(2), 023114 (2023)
39. Ivanov, A., Agapov, I.: Physics-based deep neural networks for beam dynamics in charged particle accelerators. Physical Review Accelerators and Beams 23(7), 074601 (2020)
40. Meier, D., Ramirez, L.V., Völker, J., Viefhaus, J., Sick, B., Hartmann, G.: Optimizing a superconducting radio-frequency gun using deep reinforcement learning. Physical Review Accelerators and Beams 25(10), 104604 (2022)
41. Obermair, C., Cartier-Michaud, T., Apollonio, A., Millar, W., Felsberger, L., Fischl, L., Bovbjerg, H.S., Wollmann, D., Wuensch, W., Catalan-Lasheras, N., et al.: Explainable machine learning for breakdown prediction in high gradient RF cavities. Physical Review Accelerators and Beams 25(10), 104601 (2022)
42. Tennant, C., Carpenter, A., Powers, T., Solopova, A.S., Vidyaratne, L., Iftekharuddin, K.: Superconducting radio-frequency cavity fault classification using machine learning at Jefferson Laboratory. Physical Review Accelerators and Beams 23(11), 114601 (2020)
43. Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018)
44. Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023)
45. Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016)
46. Nelson, B.K.: Time series analysis using autoregressive integrated moving average (ARIMA) models. Academic Emergency Medicine 5(7), 739–744 (1998)
47. Germain, M., Gregor, K., Murray, I., Larochelle, H.: MADE: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR
48. Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016)
49. Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature Computational Science 2(11), 745–757 (2022)
50. Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023)
51. Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE Transactions on Neural Networks and Learning Systems 32(1), 4–24 (2020)
52. Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons (2008)
53. Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021)
54. Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in Neural Information Processing Systems 28 (2015)
55. Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021)
56. Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in VAE latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021)
57. Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of Cheminformatics 10(1), 1–9 (2018)
58. Karevan, Z., Suykens, J.A.: Transductive LSTM for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020)
59. Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014)
60. Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE Transactions on Image Processing 13(4), 600–612 (2004)
61. Van der Maaten, L., Hinton, G.: Visualizing data using t-SNE. Journal of Machine Learning Research 9(11) (2008)
62. McInnes, L., Healy, J., Melville, J.: UMAP: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018)
63. Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022)
64. Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023)
65. Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: GANs trained by a two time-scale update rule converge to a local Nash equilibrium. Advances in Neural Information Processing Systems 30 (2017)
66. Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in Neural Information Processing Systems 31 (2018)
67. Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023)
68. Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of ResNet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE
69. He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016)
70. Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018)
[2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. 
[2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. 
[2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. 
arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. 
[2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. 
[2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. 
IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. 
In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. 
arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. 
Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. 
[2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. 
Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. 
[2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. 
[2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. 
arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. 
[2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. 
[2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. 
IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. 
In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. 
arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. 
Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. 
[2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. 
Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. 
[2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. 
[2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. 
IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. 
[2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. 
IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. 
In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. 
arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. 
Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. 
[2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. 
Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. 
[2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. 
[2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. 
IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. 
[2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. 
[2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. 
In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. 
[2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. 
[2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. 
[2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. 
[2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. 
[2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. 
[2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. 
[2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. 
[2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. 
[2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. 
[2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018)
[2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. 
Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. 
[2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. 
Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. 
[2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. 
[2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. 
Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. 
IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. 
Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. 
[2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. 
[2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. 
[2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. 
[2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. 
[2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. 
[2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. 
[2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. 
[2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. 
Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. 
Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. 
[2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. 
arXiv preprint arXiv:1807.07543 (2018) Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. 
[2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. 
Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. 
In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018)
43. Li, Y., Cheng, W., Yu, L.H., Rainer, R.: Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Physical Review Accelerators and Beams 21(5), 054601 (2018)
44. Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023)
45. Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016)
46. Nelson, B.K.: Time series analysis using autoregressive integrated moving average (ARIMA) models. Academic Emergency Medicine 5(7), 739–744 (1998)
47. Germain, M., Gregor, K., Murray, I., Larochelle, H.: MADE: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR
48. Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016)
49. Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature Computational Science 2(11), 745–757 (2022)
50. Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023)
51. Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE Transactions on Neural Networks and Learning Systems 32(1), 4–24 (2020)
52. Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons (2008)
53. Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021)
54. Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in Neural Information Processing Systems 28 (2015)
55. Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021)
56. Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in VAE latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021)
57. Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of Cheminformatics 10(1), 1–9 (2018)
58. Karevan, Z., Suykens, J.A.: Transductive LSTM for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020)
59. Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014)
60. Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE Transactions on Image Processing 13(4), 600–612 (2004)
61. Van der Maaten, L., Hinton, G.: Visualizing data using t-SNE. Journal of Machine Learning Research 9(11) (2008)
62. McInnes, L., Healy, J., Melville, J.: UMAP: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018)
63. Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022)
64. Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023)
65. Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: GANs trained by a two time-scale update rule converge to a local Nash equilibrium. Advances in Neural Information Processing Systems 30 (2017)
66. Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in Neural Information Processing Systems 31 (2018)
67. Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023)
68. Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of ResNet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE
69. He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016)
70. Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018)
The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. 
[2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. 
[2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. 
Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. 
Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. 
IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. 
In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. 
In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. 
[2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. 
[2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. 
[2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. 
arXiv preprint arXiv:1807.07543 (2018) Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. 
Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. 
[2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. 
[2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. 
[2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. 
arXiv preprint arXiv:1807.07543 (2018) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. 
In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018)
44. Cropp, F., Moos, L., Scheinker, A., Gilardi, A., Wang, D., Paiagua, S., Serrano, C., Musumeci, P., Filippetto, D.: Virtual-diagnostic-based time stamping for ultrafast electron diffraction. Physical Review Accelerators and Beams 26(5), 052801 (2023)
45. Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016)
46. Nelson, B.K.: Time series analysis using autoregressive integrated moving average (ARIMA) models. Academic Emergency Medicine 5(7), 739–744 (1998)
47. Germain, M., Gregor, K., Murray, I., Larochelle, H.: MADE: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR
48. Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016)
49. Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature Computational Science 2(11), 745–757 (2022)
50. Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023)
51. Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE Transactions on Neural Networks and Learning Systems 32(1), 4–24 (2020)
52. Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons (2008)
53. Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021)
54. Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in Neural Information Processing Systems 28 (2015)
55. Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021)
56. Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in VAE latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021)
57. Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of Cheminformatics 10(1), 1–9 (2018)
58. Karevan, Z., Suykens, J.A.: Transductive LSTM for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020)
59. Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014)
60. Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE Transactions on Image Processing 13(4), 600–612 (2004)
61. Van der Maaten, L., Hinton, G.: Visualizing data using t-SNE. Journal of Machine Learning Research 9(11) (2008)
62. McInnes, L., Healy, J., Melville, J.: UMAP: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018)
63. Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022)
64. Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023)
65. Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: GANs trained by a two time-scale update rule converge to a local Nash equilibrium. Advances in Neural Information Processing Systems 30 (2017)
66. Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in Neural Information Processing Systems 31 (2018)
67. Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023)
68. Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of ResNet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE
69. He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016)
70. Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018)
[2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. 
[2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. 
[2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. 
arXiv preprint arXiv:1807.07543 (2018) Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). 
IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018)
45. Scheinker, A., Scheinker, D.: Bounded extremum seeking with discontinuous dithers. Automatica 69, 250–257 (2016)
46. Nelson, B.K.: Time series analysis using autoregressive integrated moving average (ARIMA) models. Academic Emergency Medicine 5(7), 739–744 (1998)
47. Germain, M., Gregor, K., Murray, I., Larochelle, H.: MADE: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR
48. Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016)
49. Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature Computational Science 2(11), 745–757 (2022)
50. Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023)
51. Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE Transactions on Neural Networks and Learning Systems 32(1), 4–24 (2020)
52. Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons (2008)
53. Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021)
54. Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in Neural Information Processing Systems 28 (2015)
55. Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021)
56. Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in VAE latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021)
57. Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of Cheminformatics 10(1), 1–9 (2018)
58. Karevan, Z., Suykens, J.A.: Transductive LSTM for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020)
59. Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014)
60. Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE Transactions on Image Processing 13(4), 600–612 (2004)
61. Van der Maaten, L., Hinton, G.: Visualizing data using t-SNE. Journal of Machine Learning Research 9(11) (2008)
62. McInnes, L., Healy, J., Melville, J.: UMAP: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018)
63. Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022)
64. Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023)
65. Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: GANs trained by a two time-scale update rule converge to a local Nash equilibrium. Advances in Neural Information Processing Systems 30 (2017)
66. Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in Neural Information Processing Systems 31 (2018)
67. Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023)
68. Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of ResNet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE
69. He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016)
70. Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018)
[2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. 
[2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. 
arXiv preprint arXiv:1807.07543 (2018) Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. 
Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. 
[2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. 
[2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. 
[2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. 
arXiv preprint arXiv:1807.07543 (2018) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. 
In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018)
  46. Nelson, B.K.: Time series analysis using autoregressive integrated moving average (arima) models. Academic emergency medicine 5(7), 739–744 (1998) Germain et al. [2015] Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. 
arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Germain, M., Gregor, K., Murray, I., Larochelle, H.: Made: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR Uria et al. [2016] Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. 
Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. 
[2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. 
[2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. 
Advances in neural information processing systems 31 (2018)
  47. Germain, M., Gregor, K., Murray, I., Larochelle, H.: MADE: Masked autoencoder for distribution estimation. In: International Conference on Machine Learning, pp. 881–889 (2015). PMLR
  48. Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016)
  49. Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature Computational Science 2(11), 745–757 (2022)
  50. Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023)
  51. Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE Transactions on Neural Networks and Learning Systems 32(1), 4–24 (2020)
  52. Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons (2008)
  53. Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021)
  54. Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in Neural Information Processing Systems 28 (2015)
  55. Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021)
  56. Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in VAE latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021)
  57. Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of Cheminformatics 10(1), 1–9 (2018)
  58. Karevan, Z., Suykens, J.A.: Transductive LSTM for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020)
  59. Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014)
  60. Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE Transactions on Image Processing 13(4), 600–612 (2004)
  61. Van der Maaten, L., Hinton, G.: Visualizing data using t-SNE. Journal of Machine Learning Research 9(11) (2008)
  62. McInnes, L., Healy, J., Melville, J.: UMAP: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018)
  63. Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022)
  64. Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023)
  65. Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: GANs trained by a two time-scale update rule converge to a local Nash equilibrium. Advances in Neural Information Processing Systems 30 (2017)
  66. Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in Neural Information Processing Systems 31 (2018)
  67. Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023)
  68. Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of ResNet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE
  69. He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016)
  70. Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018)
Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. 
[2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. 
Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. 
[2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. 
[2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. 
IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. 
[2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. 
[2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. 
In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. 
arXiv preprint arXiv:1807.07543 (2018) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. 
In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018)
  48. Uria, B., Côté, M.-A., Gregor, K., Murray, I., Larochelle, H.: Neural autoregressive distribution estimation. The Journal of Machine Learning Research 17(1), 7184–7220 (2016) Toneva et al. [2022] Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. 
[2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. 
[2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018)
  49. Toneva, M., Mitchell, T.M., Wehbe, L.: Combining computational controls with natural text reveals aspects of meaning composition. Nature computational science 2(11), 745–757 (2022) Acharya et al. [2023] Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne.
Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. 
[2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. 
[2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. 
arXiv preprint arXiv:1807.07543 (2018) Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. 
[2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. 
[2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. 
arXiv preprint arXiv:1807.07543 (2018) Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). 
IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018)
  50. Acharya, A., Russell, R., Ahmed, N.R.: Learning to forecast aleatoric and epistemic uncertainties over long horizon trajectories. arXiv preprint arXiv:2302.08669 (2023) Wu et al. [2020] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. 
[2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4–24 (2020) Wangler [2008] Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. 
[2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons, ??? (2008) Scheinker et al. [2021] Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. 
[2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021) Sohn et al. [2015] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. 
[2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015) Yin et al. [2021] Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021) Notin et al. [2021] Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in vae latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021) Lim et al. [2018] Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics 10(1), 1–9 (2018) Karevan and Suykens [2020] Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020) Sak et al. [2014] Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. 
51. Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE Transactions on Neural Networks and Learning Systems 32(1), 4–24 (2020)
52. Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons (2008)
53. Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021)
54. Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in Neural Information Processing Systems 28 (2015)
55. Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021)
56. Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in VAE latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021)
57. Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of Cheminformatics 10(1), 1–9 (2018)
58. Karevan, Z., Suykens, J.A.: Transductive LSTM for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020)
59. Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014)
60. Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE Transactions on Image Processing 13(4), 600–612 (2004)
61. Van der Maaten, L., Hinton, G.: Visualizing data using t-SNE. Journal of Machine Learning Research 9(11) (2008)
62. McInnes, L., Healy, J., Melville, J.: UMAP: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018)
63. Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022)
64. Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023)
65. Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: GANs trained by a two time-scale update rule converge to a local Nash equilibrium. Advances in Neural Information Processing Systems 30 (2017)
66. Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in Neural Information Processing Systems 31 (2018)
67. Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023)
68. Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of ResNet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE
69. He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016)
70. Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018)
In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018)
  52. Wangler, T.P.: RF Linear Accelerators. John Wiley & Sons (2008)
  53. Scheinker, A., Huang, E.-C., Taylor, C.: Extremum seeking-based control system for particle accelerator beam loss minimization. IEEE Transactions on Control Systems Technology 30(5), 2261–2268 (2021)
[2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014) Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. 
In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004) Van der Maaten and Hinton [2008] Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008) McInnes et al. [2018] McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. 
[2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018) Rautela et al. [2022] Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022) Williams et al. [2023] Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. 
Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023) Heusel et al. [2017] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. 
In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. 
arXiv preprint arXiv:1807.07543 (2018) Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018)
  54. Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015)
  55. Yin, J., Pei, Z., Gao, M.C.: Neural network-based order parameter for phase transitions and its applications in high-entropy alloys. Nature Computational Science 1(10), 686–693 (2021)
  56. Notin, P., Hernández-Lobato, J.M., Gal, Y.: Improving black-box optimization in VAE latent space using decoder uncertainty. Advances in Neural Information Processing Systems 34, 802–814 (2021)
  57. Lim, J., Ryu, S., Kim, J.W., Kim, W.Y.: Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of Cheminformatics 10(1), 1–9 (2018)
arXiv preprint arXiv:1807.07543 (2018) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) Sajjadi et al. [2018] Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in neural information processing systems 31 (2018) Giannone et al. [2023] Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023) Rezende et al. [2017] Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of resnet-50 deep neural network. 
In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) Berthelot et al. [2018] Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018) Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018)
  58. Karevan, Z., Suykens, J.A.: Transductive lstm for time-series prediction: An application to weather forecasting. Neural Networks 125, 1–9 (2020)
  59. Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014)
  60. Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004)
  61. Maaten, L., Hinton, G.: Visualizing data using t-SNE. Journal of Machine Learning Research 9(11) (2008)
  62. McInnes, L., Healy, J., Melville, J.: UMAP: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018)
  63. Rautela, M., Senthilnath, J., Huber, A., Gopalakrishnan, S.: Towards deep generation of guided wave representations for composite materials. IEEE Transactions on Artificial Intelligence (2022)
  64. Williams, A., Scheinker, A., Huang, E.-C., Taylor, C., Krstic, M.: Experimental safe extremum seeking for accelerators. arXiv preprint arXiv:2308.15584 (2023)
  65. Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: GANs trained by a two time-scale update rule converge to a local Nash equilibrium. Advances in Neural Information Processing Systems 30 (2017)
  66. Sajjadi, M.S., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.: Assessing generative models via precision and recall. Advances in Neural Information Processing Systems 31 (2018)
  67. Giannone, G., Srivastava, A., Winther, O., Ahmed, F.: Aligning optimization trajectories with diffusion models for constrained design generation. arXiv preprint arXiv:2305.18470 (2023)
  68. Rezende, E., Ruppert, G., Carvalho, T., Ramos, F., De Geus, P.: Malicious software classification using transfer learning of ResNet-50 deep neural network. In: 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1011–1014 (2017). IEEE
  69. He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016)
  70. Berthelot, D., Raffel, C., Roy, A., Goodfellow, I.: Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 (2018)